Unassigned shards on cluster restart

Hi, I am facing the following issue:

  1. I restarted my Elasticsearch cluster and grew it from one node to three nodes (two data nodes and one arbiter node). The cluster status is now red, as shown below:

$> curl "nodeA:9200/_cat/health"
1536037434 06:03:54 graylog-uat red 3 2 13 12 0 0 1 0 - 92.9%
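For completeness, this is how I am listing the unassigned shards (just a quick check via the _cat/shards API; only the command is shown here):

$> curl "nodeA:9200/_cat/shards?v" | grep UNASSIGNED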

  2. After some investigation, I landed on the cluster allocation explain API; here is the output:

$> curl -X GET "nodeA:9200/_cluster/allocation/explain?pretty=true"
{
  "index":"graylog_0",
  "shard":0,
  "primary":true,
  "current_state":"unassigned",
  "unassigned_info":{
    "reason":"CLUSTER_RECOVERED",
    "at":"2018-09-03T12:15:27.463Z",
    "last_allocation_status":"no_valid_shard_copy"
  },
  "can_allocate":"no_valid_shard_copy",
  "allocate_explanation":"cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster",
  "node_allocation_decisions":[
    {
      "node_id":"8W_0X_3UTY6twG7ZssfdTg",
      "node_name":"nodeB",
      "transport_address":"X.X.X.X:9301",
      "node_decision":"no",
      "store":{
        "found":false
      }
    },
    {
      "node_id":"c_K4VUSCQk-QV1mlquurhA",
      "node_name":"nodeA",
      "transport_address":"X.X.X.X:9301",
      "node_decision":"no",
      "store":{
        "found":false
      }
    }
  ]
}
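Since Graylog normally rotates and re-creates its indices, I assume the data in graylog_0 is expendable. Would one of the following be the right way to recover, or is there a better option? This is just a sketch based on my reading of the delete index and cluster reroute APIs; picking nodeA as the target node is my guess.

# Option 1: delete the red index so Graylog can re-create it (loses whatever was in graylog_0)
$> curl -X DELETE "nodeA:9200/graylog_0"

# Option 2: force-allocate an empty primary on one of the data nodes (also discards the shard data)
$> curl -X POST "nodeA:9200/_cluster/reroute" -H 'Content-Type: application/json' -d '
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "graylog_0",
        "shard": 0,
        "node": "nodeA",
        "accept_data_loss": true
      }
    }
  ]
}'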

  3. I can also see, in the log below, the index 'graylog_0' being deleted. If that is the case, why is ES even looking for that index?

$> zgrep "graylog_0" *
elasticsearch-2018-08-30.log.gz:2018-08-30 08:19:45,493 | INFO | o.e.c.r.a.AllocationService | Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[graylog_0][0]] ...]).
elasticsearch-2018-08-30.log.gz:2018-08-30 13:25:11,234 | INFO | o.e.c.m.MetaDataDeleteIndexService | [graylog_0/HO0jj6QwTUmVaJn3OZQyZQ] deleting index
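To double-check, is looking the index up in the cluster metadata like this the right way to confirm whether graylog_0 still exists (it may well have been re-created by Graylog after the deletion above)?

$> curl "nodeA:9200/_cat/indices/graylog_0?v"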

Elasticsearch version: 5.6.8

Could anyone point me in the right direction? Thanks!
