Unassigned primary + replica shard, minimise data loss

Hello, my cluster is currently in red state due to one of both the primary and replica shard of an index becoming unassigned. This happened after a number of large tasks were executed simultaneously by accident on the same index alias. The index in question that was affected was the current write index of that index alias. The index contains valuable data and I would like to try to minimise any data loss if possible, ideally with none.

Calling GET _cluster/allocation/explain on the primary returns unassigned_info->reason:


and allocate_explanation:

"cannot allocate because all found copies of the shard are either stale or corrupt"

and finally, within unassigned_info->details:

"failed shard on node [<node_id>]: shard failure, reason [merge failed], failure NotSerializableExceptionWrapper[merge_exception: org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?)..."
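For reference, the output above came from a request along these lines (the index name and shard number are placeholders for our actual values):

```
GET _cluster/allocation/explain
{
  "index": "<index_name>",
  "shard": <shard_num>,
  "primary": true
}
```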

Calling the same on the replica returns:

unassigned_info->reason: "ALLOCATION_FAILED"

allocate_explanation: "cannot allocate because allocation is not permitted to any of the nodes"

and within unassigned_info->details:

"failed shard on node [<node_id>]: failed to perform indices:data/write/bulk[s] on replica [<index_name>][<shard_num>], node[<node_id>], [R], s[STARTED], a[id=<>], failure IndexShardClosedException[CurrentState[CLOSED] Primary closed.]"

I attempted a dry run of manually reallocating the replica using the reroute API, and received a status 400 with: "[allocate_replica] trying to allocate a replica shard [<index_name>][<shard_num>], while corresponding primary shard is still unassigned"
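For reference, the dry run I attempted looked roughly like this (all bracketed values are placeholders):

```
POST _cluster/reroute?dry_run=true
{
  "commands": [
    {
      "allocate_replica": {
        "index": "<index_name>",
        "shard": <shard_num>,
        "node": "<node_name>"
      }
    }
  ]
}
```

I understand there is also an allocate_stale_primary command that takes an explicit "accept_data_loss": true flag, but given the corruption warnings I was unsure whether to try it.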

What is the best course of action here? I gather I need to assign the primary shard before I can do anything with the replica. Given the CorruptIndexException, I am concerned that the primary shard (and potentially the replica too) has suffered data loss, so my thinking was that recovering from the replica was my best bet. Is my understanding incorrect here, or am I going to have to accept some data loss?

Many thanks

What version are you running?

That's not good and may indicate data loss. Do you have snapshots?

I'm running 7.13.2. And no I do not have any snapshots...

Does anyone have any recommendations for how to proceed?

My current thinking was to follow an approach similar to the one described in this Medium post: "When everything else fails. We are using Elasticsearch on a Google…" by Remco Verhoef.

Use the CheckIndex tool to see whether the shards are actually corrupt, and then proceed from there?
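Roughly something like this, I assume (the jar and data paths below are guesses based on a default package install and will differ per setup):

```
# Stop the node first, then run CheckIndex against a copy of the
# shard's Lucene index directory
java -cp /usr/share/elasticsearch/lib/lucene-core-*.jar \
  org.apache.lucene.index.CheckIndex \
  /var/lib/elasticsearch/nodes/0/indices/<index_uuid>/<shard_num>/index
```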

You may use the elasticsearch-shard CLI tool to see if you can recover something; here is the documentation.

Be aware of this warning in the documentation.

You will lose the corrupted data when you run elasticsearch-shard. This tool should only be used as a last resort if there is no way to recover from another copy of the shard or restore a snapshot.

From what you shared, there is not much else you can do, since it appears that your index is corrupted and you may already have some level of data loss.
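For reference, the last-resort invocation from that documentation looks roughly like this (run it on the node that holds the shard data, with Elasticsearch stopped on that node; the index name and shard id are placeholders):

```
bin/elasticsearch-shard remove-corrupted-data \
  --index <index_name> \
  --shard-id <shard_num>
```

As I understand it, the tool reports how much data will be lost and asks for confirmation before truncating the corrupted parts of the shard.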


Thank you! Wish me luck...

The documentation explicitly says to "Stop Elasticsearch before running elasticsearch-shard."

Does this apply to the particular node or my entire cluster?

Never used this command, but I would assume that it is the Elasticsearch node that has the shard you want to try to fix.

Due to the size of our cluster, stopping Elasticsearch / reallocating an entire node is quite an operation - do you know of any method that might allow us to address the issue without doing so? I assume not but worth an ask.. Many thanks for all your help by the way :slight_smile:

The elasticsearch-shard command is already a last resort for cases like yours, and there is no guarantee that it will work, but to be able to try it you will need to shut down this node.


Ok, thanks for the info and your swift reply.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.