Initializing Shards

Hello,

I have a three-node cluster running ES 2.3.3. One of the nodes was restarted by accident, and since then one index has had two shards stuck in the INITIALIZING state for the last day.

The indices are rolled over daily and this one is a week old, so it is not being written to now and was not being written to when the node restarted.

It is a small index (store size 2.6mb, primary store size 1.6mb) and we have plenty of disk space on each node.

In this case I could simply delete the whole index with no real loss, but I want to know how to fix this problem if it happens to more important data.
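(For completeness, deleting it would just be the usual index delete, something like the line below, but that is the last resort I would rather not rely on for important indices.)

curl -XDELETE 'localhost:9200/logstash-hsl-f5-2016.07.20'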

These are the shards for that index; van-elastic1 is the node that was restarted:

[root@van-elastic1 ~]# curl localhost:9200/_cat/shards/logstash-hsl-f5-2016.07.20
logstash-hsl-f5-2016.07.20 1 r STARTED 358 370.1kb 10.22.50.92 van-elastic2
logstash-hsl-f5-2016.07.20 1 p STARTED 358 341.2kb 10.22.50.93 van-elastic3
logstash-hsl-f5-2016.07.20 3 r INITIALIZING 10.22.50.92 van-elastic2
logstash-hsl-f5-2016.07.20 3 p STARTED 349 372.4kb 10.22.50.93 van-elastic3
logstash-hsl-f5-2016.07.20 4 r STARTED 345 350.4kb 10.22.50.92 van-elastic2
logstash-hsl-f5-2016.07.20 4 p STARTED 345 350.4kb 10.22.50.93 van-elastic3
logstash-hsl-f5-2016.07.20 2 r INITIALIZING 10.22.50.91 van-elastic1
logstash-hsl-f5-2016.07.20 2 p STARTED 346 311.6kb 10.22.50.93 van-elastic3
logstash-hsl-f5-2016.07.20 0 r STARTED 375 329.6kb 10.22.50.91 van-elastic1
logstash-hsl-f5-2016.07.20 0 p STARTED 375 329.6kb 10.22.50.92 van-elastic2

Both initializing shards are replicas. Should I delete those shards, and if so, how? If not, how can I get them started?
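If it helps anyone suggest next steps, these are the sort of commands I can run to get more detail (same index name as above, default port; I believe both endpoints exist in 2.3):

# per-shard recovery progress for the index
curl 'localhost:9200/_cat/recovery/logstash-hsl-f5-2016.07.20?v'
# shard-level health for the same index
curl 'localhost:9200/_cluster/health/logstash-hsl-f5-2016.07.20?level=shards&pretty'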

Thank you.

Hello Geezer,

Could you please attach the logs from your nodes?

Thanks

Thank you for your reply. I have since done a synced flush and shut down all nodes. I also adjusted the memory settings, as I saw we were getting "Unable to lock JVM Memory: error=12,reason=Cannot allocate memory" warnings in the ES logs.
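For reference, the flush was a single call and then each node was stopped in turn, roughly like this (default port, standard CentOS 7 service name):

# synced flush so shards can recover quickly from their local copies
curl -XPOST 'localhost:9200/_flush/synced?pretty'
# then stop Elasticsearch on each node
systemctl stop elasticsearch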

I made the changes recommended in this article:

http://mrzard.github.io/blog/2015/03/25/elasticsearch-enable-mlockall-in-centos-7/
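In short, the changes were along these lines (paths are for the RPM install on CentOS 7; the drop-in file name is just what I used, see the article for the full details):

# /etc/elasticsearch/elasticsearch.yml
bootstrap.mlockall: true

# /etc/sysconfig/elasticsearch
MAX_LOCKED_MEMORY=unlimited

# systemd drop-in, e.g. /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf
[Service]
LimitMEMLOCK=infinity

# reload systemd so the drop-in is picked up
systemctl daemon-reload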

When I restarted the ES service on all nodes, the cluster status changed to green, so we are all good.
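For anyone who hits the same thing, I checked it with something like the following (default port again):

# confirm mlockall is now enabled on each node
curl 'localhost:9200/_nodes/process?pretty'
# confirm the cluster is green with no initializing shards
curl 'localhost:9200/_cat/health?v'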

Thanks