Why are shards unassigned after a complete cluster restart?

I have a cluster with 3 nodes, all with node.master: true and node.data: true. Almost every index lost one or two shards (either primary or replica) after I restarted the three nodes one by one. All indices are configured with 5 primary shards and 1 replica. There were no errors in the log files. In the end, I rerouted these shards with allocate_empty_primary (a sketch of the command I used is shown after the explain output below). I've been looking for the cause by reading a lot of articles, but none of them matched my situation.

ES version is 5.4.3.
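
For reference, this is roughly the allocation explain request I ran for one of the affected shards (assuming the default HTTP port 9200 on host1; the exact curl form here is just a sketch from memory):

curl -XGET 'http://host1:9200/_cluster/allocation/explain?pretty' -H 'Content-Type: application/json' -d '
{
  "index": "dcvs_nonmotorvehicle",
  "shard": 3,
  "primary": true
}'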
The response is:
{
  "index": "dcvs_nonmotorvehicle",
  "shard": 3,
  "primary": true,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "CLUSTER_RECOVERED",
    "at": "2020-04-10T03:40:41.127Z",
    "last_allocation_status": "no_valid_shard_copy"
  },
  "can_allocate": "no_valid_shard_copy",
  "allocate_explanation": "cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster",
  "node_allocation_decisions": [
    {
      "node_id": "TklXzLKySf-czdu8zZ5hyQ",
      "node_name": "host2",
      "transport_address": "host2:9300",
      "node_attributes": {
        "cname": "202",
        "rack_id": "rack_two"
      },
      "node_decision": "no",
      "store": {
        "found": false
      }
    },
    {
      "node_id": "WDaA85bmQhKZxvgq4ve0Kw",
      "node_name": "host3",
      "transport_address": "host3:9300",
      "node_attributes": {
        "cname": "210",
        "rack_id": "rack_three"
      },
      "node_decision": "no",
      "store": {
        "found": false
      }
    },
    {
      "node_id": "dAXXDGDyQaGcex6VcZP0eg",
      "node_name": "host1",
      "transport_address": "host1:9300",
      "node_attributes": {
        "cname": "201",
        "rack_id": "rack_one"
      },
      "node_decision": "no",
      "store": {
        "found": false
      }
    }
  ]
}
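
And this is roughly the reroute I ended up running for each stuck primary (the target node here is just an example; accept_data_loss is required because allocate_empty_primary creates an empty shard and discards whatever the lost copy held):

curl -XPOST 'http://host1:9200/_cluster/reroute?pretty' -H 'Content-Type: application/json' -d '
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "dcvs_nonmotorvehicle",
        "shard": 3,
        "node": "host1",
        "accept_data_loss": true
      }
    }
  ]
}'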
