How to fix UNASSIGNED shard in Elasticsearch

While monitoring the Elasticsearch logs, I observed a shard failure; please see below:

The cluster (1 node) is in a red state. While troubleshooting, I found one unassigned shard via curl localhost:9200/_cat/shards (logstash-2015.09.08 0 p UNASSIGNED).
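
For reference, the checks were done with the standard cluster health and cat shards endpoints, roughly like this (default host and port assumed):

curl -s 'localhost:9200/_cluster/health?pretty'          # reports status: red and the unassigned_shards count
curl -s 'localhost:9200/_cat/shards' | grep UNASSIGNED   # shows the affected shard, e.g. logstash-2015.09.08 0 p UNASSIGNED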

Is there any well-known operational procedure to fix this shard/index issue?

I am using:
elasticsearch-1.5.2-1.noarch
logstash-1.5.3-1.noarch
logstash-forwarder-0.4.0-1.x86_64

[2015-09-14 10:43:32,566][WARN ][cluster.action.shard ] [aws_elk_01] [logstash-2015.09.08][0] received shard failed for [logstash-2015.09.08][0], node[-bcyalWpTKy1wMRQnfW7uA], [P], s[INITIALIZING], indexUUID [U59vyxSySz6mPZrsh7lpUg], reason [shard failure [failed recovery][IndexShardGatewayRecoveryException[[logstash-2015.09.08][0] failed recovery]; nested: EngineCreationFailureException[[logstash-2015.09.08][0] failed to upgrade 3x segments]; nested: EOFException[read past EOF: NIOFSIndexInput(path="/var/elasticdata/elasticaws/nodes/0/indices/logstash-2015.09.08/0/index/segments_u")]; ]]
[2015-09-14 10:43:42,567][WARN ][indices.cluster ] [aws_elk_01] [[logstash-2015.09.08][0]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [logstash-2015.09.08][0] failed recovery
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:162)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.index.engine.EngineCreationFailureException: [logstash-2015.09.08][0] failed to upgrade 3x segments
at org.elasticsearch.index.engine.InternalEngine.(InternalEngine.java:121)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:32)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1262)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1257)
at org.elasticsearch.index.shard.IndexShard.prepareForTranslogRecovery(IndexShard.java:784)
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:226)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:112)
... 3 more
Caused by: java.io.EOFException: read past EOF: NIOFSIndexInput(path="/var/elasticdata/elasticaws/nodes/0/indices/logstash-2015.09.08/0/index/segments_u")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:98)
at org.apache.lucene.store.BufferedIndexInput.readInt(BufferedIndexInput.java:183)
at org.elasticsearch.common.lucene.Lucene.indexNeeds3xUpgrading(Lucene.java:767)
at org.elasticsearch.common.lucene.Lucene.upgradeLucene3xSegmentsMetadata(Lucene.java:778)
at org.elasticsearch.index.engine.InternalEngine.upgrade3xSegments(InternalEngine.java:1084)
at org.elasticsearch.ind

All the best,
Carlos


Did this index come from an older cluster?

Hi

No. Only this index/shard was affected.
I fixed it by running the following:
curl -XPOST -d '{ "commands" : [ { "allocate" : { "index" : "logstash-2015.09.08", "shard" : 0, "node" : "aws_elk_01", "allow_primary" : true } } ] }' http://localhost:9200/_cluster/reroute?pretty
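
A note for anyone landing here later: allow_primary: true forces Elasticsearch to allocate the primary on the named node even when it cannot confirm the local copy is intact, so it can lose data if that copy is damaged. It is worth verifying the result afterwards, roughly like this (same default host and port assumed; the index and node name are from my case):

curl -s 'localhost:9200/_cluster/health?pretty'                     # status should improve from red (yellow or green depending on replica settings)
curl -s 'localhost:9200/_cat/shards' | grep 'logstash-2015.09.08'   # shard 0 should now show STARTED instead of UNASSIGNED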

Thank you for your input,

Carlos Fernando

I am having the same problem; any idea why this happens?

Please start your own thread.

Hi,

No, it is not clear to me why this happened. However, after forcing the cluster to allocate the shard from the existing copy (as previously reported), no related issues have been observed.

Is this something that happens to you recurrently?

All the best,
Carlos Fernando