Replica shards of default indices are in UNASSIGNED state

Hi Team,

Recently we have been facing an issue where the replica shards of all the default indices are in the UNASSIGNED state.

Because of this, the cluster keeps going into yellow and red states.

index                                                          shard prirep state
.tasks                                                         0     r      UNASSIGNED
.kibana-event-log-7.11.0-000019                                0     r      UNASSIGNED
.monitoring-es-7-2023.06.25                                    0     r      UNASSIGNED
.ds-.logs-deprecation.elasticsearch-default-2022.07.21-000003  0     r      UNASSIGNED
.kibana_task_manager_7.15.2_001                                0     r      UNASSIGNED

Can anyone please help me get these UNASSIGNED replica shards back to the ASSIGNED state, or is there an option to remove the replicas of the default indices?
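For reference, replica counts can be lowered dynamically through the index settings API. A minimal sketch (the host and index pattern below are placeholders, not values from this cluster; note that dropping replicas removes redundancy for those indices):

```shell
# Sketch: reduce the replica count of a selected index pattern to 0.
# "localhost:9200" and the index pattern are placeholder assumptions;
# adjust them before running anything like this.
curl -X PUT "localhost:9200/.monitoring-es-*/_settings" \
  -H 'Content-Type: application/json' -d'
{
  "index": { "number_of_replicas": 0 }
}'
```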

Thanks,
Sathish Thumma.

Can anyone please suggest a solution for the above issue?

I have noticed that mainly the replicas of the below indices are not getting assigned:

  1. .ds-.logs-deprecation.elasticsearch-default-*
  2. .kibana*
  3. .monitoring-es-*
  4. .apm-*
index                                                          shard prirep state      node unassigned.reason
.apm-agent-configuration                                       0     r      UNASSIGNED      REPLICA_ADDED
.kibana_7.17.4_reindex_temp                                    0     r      UNASSIGNED      CLUSTER_RECOVERED
.kibana_7.15.0_001                                             0     r      UNASSIGNED      REPLICA_ADDED
.async-search                                                  0     r      UNASSIGNED      REPLICA_ADDED
.monitoring-es-7-2023.06.27                                    0     r      UNASSIGNED      REPLICA_ADDED
.tasks                                                         0     r      UNASSIGNED      REPLICA_ADDED
.monitoring-es-7-2023.06.28                                    0     r      UNASSIGNED      REPLICA_ADDED
.ds-.logs-deprecation.elasticsearch-default-2022.06.23-000001  0     r      UNASSIGNED      CLUSTER_RECOVERED

Thanks,
Sathish Thumma

Which version of Elasticsearch are you using?

How many nodes do you have in your cluster?

How are these configured, e.g. with respect to different roles?

We are using version 7.17.10 and have 41 nodes in total (5 master, 35 data). As far as I know, we have not configured any roles.
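For reference, each node's configured roles can be verified quickly with the cat nodes API (a sketch; the host is a placeholder assumption):

```shell
# List node names, their role letters (m = master-eligible, d = data, ...),
# and which node is the elected master (marked "*").
curl -s "localhost:9200/_cat/nodes?v&h=name,node.role,master"
```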

By running:

curl -X GET "xxxxxxxxx:9200/_cluster/allocation/explain?pretty" -H 'Content-Type: application/json' -d'
{
  "index": ".kibana_7.15.0_001",
  "shard": 0,
  "primary": false
}
'

this is what I get back:

{
"index" : ".kibana_7.15.0_001",
"shard" : 0,
"primary" : false,
"current_state" : "unassigned",
"unassigned_info" : {
"reason" : "REPLICA_ADDED",
"at" : "2023-06-26T17:12:19.599Z",
"last_allocation_status" : "no_attempt"
},
"can_allocate" : "no",
"allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
"node_allocation_decisions" : [
{
"node_id" : "xxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxx:xxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xx.xx"]"
}
]
},
{

Please share the entire response of the allocation explain for that shard; you shared only part of it.

Also share the result of the following request: GET /_cluster/settings


{
"index" : ".kibana_7.15.0_001",
"shard" : 0,
"primary" : false,
"current_state" : "unassigned",
"unassigned_info" : {
"reason" : "REPLICA_ADDED",
"at" : "2023-06-26T17:12:19.599Z",
"last_allocation_status" : "no_attempt"
},
"can_allocate" : "no",
"allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
"node_allocation_decisions" : [
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156862464",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156932096",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156809216",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156813312",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156907520",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156862464",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156809216",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156739584",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "disk_threshold",
"decision" : "NO",
"explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [9.429708100306696%]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156907520",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156862464",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156903424",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156694528",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156874752",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156694528",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
},
{
"decider" : "disk_threshold",
"decision" : "NO",
"explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [9.342281105565426%]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156694528",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156932096",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156809216",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156932096",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156874752",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
},
{
"decider" : "disk_threshold",
"decision" : "NO",
"explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [8.509359847064598%]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156874752",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156903424",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156903424",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156739584",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "same_shard",
"decision" : "NO",
"explanation" : "a copy of this shard is already allocated to this node [[.kibana_7.15.0_001][0], node[oFyaWzxdQHqEay6fG4d_GQ], [P], s[STARTED], a[id=2XrjM3NeSAKTxkge_MFdnQ]]"
},
{
"decider" : "disk_threshold",
"decision" : "NO",
"explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [9.914021061903375%]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156739584",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "disk_threshold",
"decision" : "NO",
"explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [8.838070635661516%]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156813312",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156895232",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156907520",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
},
{
"node_id" : "xxxxxxxxxxxxxxxxxxxx",
"node_name" : "xxxxxxxxxxxxxxxxxxxxxxx",
"transport_address" : "xxxxxxxxx:xxxxxxxxx",
"node_attributes" : {
"ml.machine_memory" : "404156813312",
"ml.max_open_jobs" : "512",
"xpack.installed" : "true",
"ml.max_jvm_size" : "33766506496",
"transform.node" : "true"
},
"node_decision" : "no",
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]"
}
]
}
]
}

Response of GET /_cluster/settings:

{
"persistent": {
"cluster": {
"routing": {
"allocation": {
"include": {
"_ip": "xx.xx.xx.xxx"
},
"exclude": {
"_ip": ""
}
}
}
},
"search": {
"max_open_scroll_context": "5000"
},
"xpack": {
"monitoring": {
"collection": {
"enabled": "true"
}
}
}
},
"transient": {}
}

The IP address in the log message

node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]

is the same one that appears in /_cluster/settings.

None of your nodes can allocate your shard: most are excluded because you are filtering by IP address, and the few that match the filter do not have enough free disk space.

If you look at the response of the allocation explain, you will see that the majority of your nodes cannot allocate this shard because of this:

node does not cluster setting [cluster.routing.allocation.include] filters [_ip:"xx.xx.xxx.xx"]

This means that the node IP address does not match the IP address in the include filter.
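If the IP include filter is no longer needed, one way to lift it is to set it to null, which removes a persistent setting entirely (a sketch; the host is a placeholder assumption):

```shell
# Remove the IP-based include filter so that all data nodes become
# eligible for shard allocation again.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
  "persistent": { "cluster.routing.allocation.include._ip": null }
}'
```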

A couple of the nodes that do match the filtered IP cannot allocate the shard because their disks have already reached the low watermark:

the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [9.914021061903375%]
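Per-node disk usage can be checked with the cat allocation API (a sketch; the host is a placeholder assumption):

```shell
# Show per-node disk usage and shard counts, fullest nodes first;
# nodes above 85% used have crossed the default low watermark.
curl -s "localhost:9200/_cat/allocation?v&s=disk.percent:desc"
```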

And another one obviously cannot allocate it because it already holds the primary shard:

a copy of this shard is already allocated to this node [[.kibana_7.15.0_001][0], node[oFyaWzxdQHqEay6fG4d_GQ], [P], s[STARTED], a[id=2XrjM3NeSAKTxkge_MFdnQ]]
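Once the filter and disk issues are addressed, allocation normally resumes on its own; if any shards remain stuck after repeated failed attempts, a retry can be triggered explicitly (a sketch; the host is a placeholder assumption):

```shell
# Ask the master to retry allocating shards whose previous
# allocation attempts failed.
curl -X POST "localhost:9200/_cluster/reroute?retry_failed=true"
```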

Thanks a lot for helping me understand what the exact issue is.

For the cluster.routing.allocation.include setting, is it OK to use the option below?

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.include._ip": "*"
  }
}

And for the disk watermarks, can we use the settings below to resolve this issue?

cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
cluster.routing.allocation.disk.watermark.low: 30gb
cluster.routing.allocation.disk.watermark.high: 20gb
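These settings can also be applied at runtime through the cluster settings API rather than elasticsearch.yml. A sketch with the same values (the host is a placeholder assumption; note that low, high, and flood_stage should all use the same style, either all percentages or all absolute values):

```shell
# Apply the proposed disk watermark settings dynamically. With absolute
# values, each watermark is the minimum free space required on a node.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.disk.threshold_enabled": true,
    "cluster.routing.allocation.disk.watermark.low": "30gb",
    "cluster.routing.allocation.disk.watermark.high": "20gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "5gb"
  }
}'
```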

What is the size of storage per node?

What is your average shard size?

Storage per node = 4TB
Average Shard size = 25GB

The default low watermark is set at 15% free capacity, which in your case would be 600GB. Given that merging can cause shard sizes to almost double before disk space is reclaimed, I think the values you provided are far too aggressive. I can see that you may want it lower than the default 600GB, but I would recommend lowering it to maybe half or a quarter of that, not to 30GB.

Thank you for your suggestion; I will set the low watermark to 300GB.
Could you also please let me know whether there are any other settings I need to change apart from this?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.