Replica shards unassigned after shard allocation filter applied

I want to upgrade the operating system for my Elasticsearch cluster without giving the cluster any downtime, so I followed the steps below.

  1. Run the command below to move shards away from one server in the cluster:

PUT _cluster/settings
{
    "transient": {
        "cluster.routing.allocation.exclude._ip": "10.0.0.1"
    }
}
  2. Then we shut down the server.
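
Before step 2, a quick way to confirm the excluded node has actually been drained (assuming the cat API is reachable) is a request like the one below, which lists every shard and the node it is currently on; once relocation has finished, the excluded node should no longer appear in the node column.

GET _cat/shards?v&h=index,shard,prirep,state,node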

This command works fine and is able to reallocate the primary shards, but the issue is that it is not able to reallocate the replica shards, and the cluster is in a yellow state.

Even after starting the server again, the cluster is in a yellow state with replica shards unassigned.

Details:
No. of master nodes: 3
No. of data nodes: 3
ES version: 5.1

I also tried using the command below:

PUT /_cluster/settings
{
    "transient": {
        "cluster.routing.rebalance.enable": "all"
    }
}

Can someone please help?

It sounds as if shard allocation has been turned off in the cluster (that would explain the unassigned shards), so you could try running the following command:

PUT _cluster/settings
{
    "transient": {
        "cluster.routing.allocation.enable": "all"
    }
}

to enable shard allocation. If that doesn't work, you'll need to debug the situation by running:

GET /_cluster/allocation/explain

It will list information about the unassigned shard and specifically why it hasn't been assigned to a node. For more info check out the Explain API.
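
For example, to ask about one specific replica shard (the index name here is only a placeholder), the 5.x explain API accepts a request body with index, shard and primary:

GET /_cluster/allocation/explain
{
    "index": "my-index",
    "shard": 1,
    "primary": false
}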

Another possible cause, given that you're doing allocation filtering, is that the elasticsearch.yml file on the restarted node has a typo in its node.attr.zone field. I have never had this problem myself, but I assume that if the zone value doesn't match any of the acceptable filtering values, as specified in cluster.routing.allocation.awareness.force.zone.values, the node won't get any shards.
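
I haven't verified this against your configuration, but as a rough sketch (the zone names below are invented), the node attribute and the awareness settings need to line up along these lines:

# elasticsearch.yml on each data node
node.attr.zone: zone-a

# awareness settings, e.g. in elasticsearch.yml on the master-eligible nodes
cluster.routing.allocation.awareness.attributes: zone
cluster.routing.allocation.awareness.force.zone.values: zone-a,zone-b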

I set "cluster.routing.allocation.enable": "all", and the cluster settings now look like this:

{
  "persistent": {},
  "transient": {
    "cluster": {
      "routing": {
        "rebalance": {
          "enable": "all"
        },
        "allocation": {
          "include": {
            "_name": "c049cxm-data-9212"
          },
          "enable": "all",
          "exclude": {
            "_name": "c049cxm-data-9212"
          }
        }
      }
    },
    "indices": {
      "ttl": {
        "interval": "20s"
      }
    }
  }
}

All nodes are getting primary shards but none of them are taking replica shards.

Output of /_cluster/allocation/explain:

{
  "shard": {
    "index": "send.php",
    "index_uuid": "Y4m4HQ59S_a8x5CeY_-kVQ",
    "id": 1,
    "primary": false
  },
  "assigned": false,
  "shard_state_fetch_pending": false,
  "unassigned_info": {
    "reason": "NODE_LEFT",
    "at": "2018-06-07T11:21:54.572Z",
    "delayed": false,
    "details": "node_left[Osrdttn7Sci3-VHxyNSL0A]",
    "allocation_status": "no_attempt"
  },
  "allocation_delay_in_millis": 60000,
  "remaining_delay_in_millis": 0,
  "nodes": {
    "Sh9O3Dp_QP-EleCIkZLfpA": {
      "node_name": "c903gva-data-9216",
      "node_attributes": {
        "name": "c903gva-9216",
        "appserver_instance": "PlatformSearch-es-Demo_5.1.2.16_9416"
      },
      "store": {
        "shard_copy": "NONE"
      },
      "final_decision": "NO",
      "final_explanation": "the shard cannot be assigned because allocation deciders return a NO decision",
      "weight": 33.176193,
      "decisions": [
        {
          "decider": "filter",
          "decision": "NO",
          "explanation": "node does not match global include filters [_name:"c049cxm-data-9212"]"
        }
      ]
    },
    "17tO_MbNSECewUU1bUvZtw": {
      "node_name": "c989vea-data-9217",
      "node_attributes": {
        "name": "c989vea-9217",
        "appserver_instance": "PlatformSearch-es-Demo_5.1.2.16_9417"
      },
      "store": {
        "shard_copy": "NONE"
      },
      "final_decision": "NO",
      "final_explanation": "the shard cannot be assigned because allocation deciders return a NO decision",
      "weight": 15.076191,
      "decisions": [
        {
          "decider": "filter",
          "decision": "NO",
          "explanation": "node does not match global include filters [_name:"c049cxm-data-9212"]"
        },
        {
          "decider": "same_shard",
          "decision": "NO",
          "explanation": "shard cannot be allocated on the same host [17tO_MbNSECewUU1bUvZtw] on which it already exists"
        }
      ]
    },
    "y1xKA959RMK9Vu09hQy0ZQ": {
      "node_name": "c989vea-data-9213",
      "node_attributes": {
        "name": "c989vea-9213",
        "appserver_instance": "PlatformSearch-es-Demo_5.1.2.16_9413"
      },
      "store": {
        "shard_copy": "NONE"
      },
      "final_decision": "NO",
      "final_explanation": "the shard cannot be assigned because allocation deciders return a NO decision",
      "weight": 15.976191,
      "decisions": [
        {
          "decider": "filter",
          "decision": "NO",
          "explanation": "node does not match global include filters [_name:"c049cxm-data-9212"]"
        },
        {
          "decider": "same_shard",
          "decision": "NO",
          "explanation": "shard cannot be allocated on the same host [y1xKA959RMK9Vu09hQy0ZQ] on which it already exists"
        }
      ]
    },
    "E2OBXXNhRXeqpN8mI__1Zw": {
      "node_name": "c049cxm-data-9216",
      "node_attributes": {
        "name": "c049cxm-9216",
        "appserver_instance": "PlatformSearch-es-Demo_5.1.2.16_9416"
      },
      "store": {
        "shard_copy": "NONE"
      },
      "final_decision": "NO",
      "final_explanation": "the shard cannot be assigned because allocation deciders return a NO decision",
      "weight": 20.576193,
      "decisions": [
        {
          "decider": "filter",
          "decision": "NO",
          "explanation": "node does not match global include filters [_name:"c049cxm-data-9212"]"
        }
      ]
    },
    "Bi8TvTurSxyPt51lrT3ByQ": {
      "node_name": "c903gva-data-9215",
      "node_attributes": {
        "name": "c903gva-9215",
        "appserver_instance": "PlatformSearch-es-Demo_5.1.2.16_9415"
      },
      "store": {
        "shard_copy": "NONE"
      },
      "final_decision": "NO",
      "final_explanation": "the shard cannot be assigned because allocation deciders return a NO decision",
      "weight": 33.176193,
      "decisions": [
        {
          "decider": "filter",
          "decision": "NO",
          "explanation": "node does not match global include filters [_name:"c049cxm-data-9212"]"
        }
      ]
    },
    "bGcBT06CRK21cjnGI8WkFA": {
      "node_name": "c049cxm-data-9213",
      "node_attributes": {
        "name": "c049cxm-9213",
        "appserver_instance": "PlatformSearch-es-Demo_5.1.2.16_9413"
      },
      "store": {
        "shard_copy": "NONE"
      },
      "final_decision": "NO",
      "final_explanation": "the shard cannot be assigned because allocation deciders return a NO decision",
      "weight": 17.876192,
      "decisions": [
        {
          "decider": "filter",
          "decision": "NO",
          "explanation": "node does not match global include filters [_name:"c049cxm-data-9212"]"
        }
      ]
    },
    "sFfevM0uTSm1nPyk7qus6A": {
      "node_name": "c989vea-data-9216",
      "node_attributes": {
        "name": "c989vea-9216",
        "appserver_instance": "PlatformSearch-es-Demo_5.1.2.16_9416"
      },
      "store": {
        "shard_copy": "AVAILABLE"
      },
      "final_decision": "NO",
      "final_explanation": "the shard cannot be assigned because allocation deciders return a NO decision",
      "weight": 13.826192,
      "decisions": [
        {
          "decider": "filter",
          "decision": "NO",
          "explanation": "node does not match global include filters [_name:"c049cxm-data-9212"]"
        },

According to the explain output, it seems your allocation filters are excluding one or more nodes from shard allocation, so you should take a look at those.
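
For example, since these are transient settings, both the include and exclude filters can be removed by setting them to null; this is just a sketch, assuming nothing else in your setup relies on those filters:

PUT _cluster/settings
{
    "transient": {
        "cluster.routing.allocation.include._name": null,
        "cluster.routing.allocation.exclude._name": null
    }
}

Once the filters are gone, the replicas should be assigned again, provided cluster.routing.allocation.enable is still "all".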

Yes, I had to upgrade the operating system (OS), and 5 ES nodes were running on one OS instance, so I applied a node-based shard allocation filter as below, which moved all the primary shards to other nodes, but the replicas became unassigned:

PUT _cluster/settings
{
    "transient": {
        "cluster.routing.allocation.exclude._name": "c049cxm-data-9212"
    }
}

Later I tried to include the node again, but it is not working:

PUT _cluster/settings
{
    "transient": {
        "cluster.routing.allocation.include._name": "c049cxm-data-9212"
    }
}
