Hi,
I just restored an ES snapshot, but all the restored indices come back in red status.
When I call GET _cluster/allocation/explain, I get the following:
{
  "index" : "componentinformation",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NEW_INDEX_RESTORED",
    "at" : "2022-10-20T09:15:03.302Z",
    "details" : "restore_source[rep_2/monthly-snapshot-2022.07.01]",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "cx52o2FQRnmCJ4ReU8xCUQ",
      "node_name" : "node-ELK3",
      "transport_address" : "10.116.39.197:9300",
      "node_attributes" : {
        "rack" : "r1c",
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [rep_2:monthly-snapshot-2022.07.01/CbcV4EviSt-Hg3pDILGUHQ] because of [restore_source[rep_2/monthly-snapshot-2022.07.01]] - manually close or delete the index [aoi-componentinformation-20220520] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        },
        {
          "decider" : "disk_threshold",
          "decision" : "NO",
          "explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [9.270049883195354%]"
        }
      ]
    },
    {
      "node_id" : "nQtgMGm8RjGJkrYmOH9lLw",
      "node_name" : "datanode-1",
      "transport_address" : "10.116.37.151:9300",
      "node_attributes" : {
        "rack" : "r1a",
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 2,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [rep_2:monthly-snapshot-2022.07.01/CbcV4EviSt-Hg3pDILGUHQ] because of [restore_source[rep_2/monthly-snapshot-2022.07.01]] - manually close or delete the index [componentinformation] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        },
        {
          "decider" : "disk_threshold",
          "decision" : "NO",
          "explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [3.7204337764371793%]"
        }
      ]
    },
    {
      "node_id" : "AlnFm0gKRtit-rYm2jFfFA",
      "node_name" : "node-ELK2",
      "transport_address" : "10.116.37.201:9300",
      "node_attributes" : {
        "rack" : "r1a",
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 3,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [rep_2:monthly-snapshot-2022.07.01/CbcV4EviSt-Hg3pDILGUHQ] because of [restore_source[rep_2/monthly-snapshot-2022.07.01]] - manually close or delete the index [aoi-componentinformation-20220520] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        },
        {
          "decider" : "disk_threshold",
          "decision" : "NO",
          "explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [14.17961261243759%]"
        }
      ]
    },
    {
      "node_id" : "2qfDaKt9Rgei_3hvscmIrA",
      "node_name" : "datanode-2",
      "transport_address" : "10.116.39.168:9300",
      "node_attributes" : {
        "rack" : "r1c",
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 4,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [rep_2:monthly-snapshot-2022.07.01/CbcV4EviSt-Hg3pDILGUHQ] because of [restore_source[rep_2/monthly-snapshot-2022.07.01]] - manually close or delete the index [componentinformation] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        },
        {
          "decider" : "disk_threshold",
          "decision" : "NO",
          "explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [7.377034012598388%]"
        }
      ]
    },
    {
      "node_id" : "b4UtklaXQmuycSgwlikE7Q",
      "node_name" : "node-ELK4",
      "transport_address" : "10.116.38.189:9300",
      "node_attributes" : {
        "rack" : "r1a",
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 5,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [rep_2:monthly-snapshot-2022.07.01/CbcV4EviSt-Hg3pDILGUHQ] because of [restore_source[rep_2/monthly-snapshot-2022.07.01]] - manually close or delete the index [componentinformation] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        },
        {
          "decider" : "disk_threshold",
          "decision" : "NO",
          "explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [12.035006081673684%]"
        }
      ]
    }
  ]
}
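If I read the restore_in_progress explanation correctly, the failed restore has to be cleared before it can be retried, so I was planning to delete the red index and run the restore again, roughly like this (repository and snapshot names taken from the output above; the include_global_state flag is just my assumption):

# Delete the red index to clear the failed restore, as the decider message suggests
DELETE componentinformation

# Retry restoring only that index from the same snapshot
POST _snapshot/rep_2/monthly-snapshot-2022.07.01/_restore
{
  "indices": "componentinformation",
  "include_global_state": false
}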
We are hosting the solution on an EC2 cluster. Do you think adding more memory will resolve the issue?
Or is it something related to the ES/JVM configuration?
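For what it's worth, the disk_threshold decider fires on every node as well, so here is how I have been checking per-node disk usage and the effective watermark settings (assuming we are still on the default watermark configuration):

# Per-node disk usage, free space, and shard counts
GET _cat/allocation?v

# Effective disk watermark settings, including defaults, as flat keys
GET _cluster/settings?include_defaults=true&flat_settings=true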