Indices in red state. "cannot allocate because all found copies of the shard are either stale or corrupt"

Hi, we are facing an issue where the health of some indices is not returning to yellow or green because of this error: "cannot allocate because all found copies of the shard are either stale or corrupt". Can someone please guide us on how to fix this without any data loss?
We tried the reroute API with retry_failed=true, but that didn't work.
We are using Elasticsearch version 6.2.3.
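
For reference, the retry we ran was along these lines:

POST /_cluster/reroute?retry_failed=true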

Please share more of the response you are getting.

Please note that version is EOL and no longer supported; you should be looking to upgrade as a matter of urgency.


{
  "index" : "IndexName",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2023-03-26T06:26:40.845Z",
    "details" : "node_left[kKhS_E7FQDS_xVaLZTs4gg]",
    "last_allocation_status" : "no_valid_shard_copy"
  },
  "can_allocate" : "no_valid_shard_copy",
  "allocate_explanation" : "cannot allocate because all found copies of the shard are either stale or corrupt",
  "node_allocation_decisions" : [
    {
      "node_id" : "22l8b-PaTAey6kfuWsx9uQ",
      "node_name" : "node-c007-data-vm6",
      "transport_address" : "10.7.11.16:9300",
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },

Is that node no longer part of the cluster? Do you have a replica?
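
You can also list the copies of that shard and see why they are unassigned with something like this (adjust the index name; the column list is just a suggestion):

GET _cat/shards/IndexName?v&h=index,shard,prirep,state,node,unassigned.reason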

That node is part of the cluster. We do have a replica copy of the same index, but it shows the status as CLUSTER_RECOVERED.

Sorry, what do you mean by replica? A replica shard, right?

Can someone please help fix this issue?

What's the output from this:

GET /_cluster/allocation/explain?pretty
{
  "index": "IndexName",
  "shard": 0,
  "primary": false
}

Sorry for the late reply.
Here is the result I got:
...primary shard for this replica is not yet active"},{"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]},
{"node_id":"UnTTeuAkQsW_Qt_fyfiP1Q","node_name":"..-c007-data-vm13","transport_address":"...18:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]},
{"node_id":"Y916v1p1SIaOwhTcBGqHDQ","node_name":"..-c007-data-vm11","transport_address":"...20:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]},
{"node_id":"_25POYZISYaZkdUNAW0hfQ","node_name":"..-c007-data-vm8","transport_address":"...14:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]},
{"node_id":"in7cl1tERVapS5n6EofsrQ","node_name":"..-c007-data-vm3","transport_address":"...12:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]},
{"node_id":"jtFs4rt3S_e4VB6TSAhn1Q","node_name":"..-c007-data-vm16","transport_address":"...10:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]},
{"node_id":"kKhS_E7FQDS_xVaLZTs4gg","node_name":"..-c007-data-vm10","transport_address":"...22:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"THROTTLE","explanation":"reached the limit of incoming shard recoveries [2], cluster setting [cluster.routing.allocation.node_concurrent_incoming_recoveries=2] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"}]},
{"node_id":"tE8ueSj9TgeyapbURZqnMw","node_name":"..-c007-data-vm19","transport_address":"...23:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]},
{"node_id":"ujM8E509TvysjIAbyslefQ","node_name":"..-c007-data-vm15","transport_address":"...13:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]},
{"node_id":"wlE49OBsSnubzwCYlOQH2A","node_name":"..-c007-data-vm4","transport_address":"...17:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]},
{"node_id":"xh21XA3_TzmsaamYMdZihQ","node_name":"..-c007-data-vm5","transport_address":"...11:9300","node_decision":"no","deciders":[
  {"decider":"replica_after_primary_active","decision":"NO","explanation":"primary shard for this replica is not yet active"},
  {"decider":"throttling","decision":"NO","explanation":"primary shard for this replica is not yet active"}]}]}

If all primary and replica copies of the shard are stale or corrupt, I am not sure recovering them without data loss is possible. I would recommend restoring from a recent snapshot.
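
A minimal restore sketch, assuming you already have a snapshot repository registered (the repository and snapshot names below are placeholders, and the index has to be closed or deleted before restoring over it):

GET /_snapshot/my_backup_repo/_all

POST /IndexName/_close

POST /_snapshot/my_backup_repo/snapshot_1/_restore
{
  "indices": "IndexName"
}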
