@warkolm
My apologies for any confusion. What I mean by "room" is that it's the only node that is missing a shard of this index. The unassigned shard will not allocate to any of the other nodes because each of them already has 2 shards assigned, and I am only allowing 2 shards per node. Below is the allocation explain request I ran, followed by its output:
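A sketch of the explain request (the index, shard, and primary values are the ones from the response below):

GET _cluster/allocation/explain
{
  "index" : "sessions2-200920",
  "shard" : 3,
  "primary" : false
}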
{
  "index" : "sessions2-200920",
  "shard" : 3,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "REPLICA_ADDED",
    "at" : "2020-09-21T06:11:14.548Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "9xxx99xxxbb8S1yDg6xxx",
      "node_name" : "node2",
      "transport_address" : "xxx.xxx.xx.xx:9300",
      "node_attributes" : {
        "ml.machine_memory" : "67190153216",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "shards_limit",
          "decision" : "NO",
          "explanation" : "too many shards [2] allocated to this node for index [sessions2-200920], index setting [index.routing.allocation.total_shards_per_node=2]"
        }
      ]
    },
    {
      "node_id" : "BVtxxxdvSxxxqXxeFLWxxx_lA",
      "node_name" : "node1",
      "transport_address" : "xxx.xxx.xx.xx:9300",
      "node_attributes" : {
        "ml.machine_memory" : "67190153216",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "shards_limit",
          "decision" : "NO",
          "explanation" : "too many shards [2] allocated to this node for index [sessions2-200920], index setting [index.routing.allocation.total_shards_per_node=2]"
        }
      ]
    },
    {
      "node_id" : "MxxxiYkaQMSpzgxxx_0S4xxxg",
      "node_name" : "node3",
      "transport_address" : "xxx.xxx.xx.xx:9300",
      "node_attributes" : {
        "ml.machine_memory" : "67190153216",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "store" : {
        "matching_size_in_bytes" : 64147690805
      },
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[sessions2-200920][3], node[xxx3iYkaxxxpxxxLL_0Sxxx], [P], s[STARTED], a[id=xxxxTLXxxxxW2xcxjxtxHLUg]]"
        }
      ]
    },
    {
      "node_id" : "xxxGvfV3xxxmOZxxxw2bxxx",
      "node_name" : "node4",
      "transport_address" : "xxx.xxx.xx.xx:9300",
      "node_attributes" : {
        "ml.machine_memory" : "67190153216",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "shards_limit",
          "decision" : "NO",
          "explanation" : "too many shards [2] allocated to this node for index [sessions2-200920], index setting [index.routing.allocation.total_shards_per_node=2]"
        }
      ]
    }
  ]
}
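As a sanity check on the "room" claim, something like _cat/shards can show how the copies of this index are spread across the nodes (a sketch; the column list is just what I find useful here):

GET _cat/shards/sessions2-200920?v&h=index,shard,prirep,state,node

If node3 really is the only node below the 2-shard limit, this should list fewer copies on node3 than on the other nodes.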
From the output above, the only way I can think of to resolve this is to move a replica shard from its current node to node 3, and then move the unassigned shard to whichever node is available, except for node 3, which already holds the primary of shard 3; a sketch of that reroute is below.
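A sketch of that first move with the cluster reroute API, assuming I pick a replica of some other shard, say shard 0, that currently sits on node1 and has no copy on node3 (both the shard number and the source node are illustrative, not taken from the output above):

POST _cluster/reroute
{
  "commands" : [
    {
      "move" : {
        "index" : "sessions2-200920",
        "shard" : 0,             // assumption: a shard with a replica on node1 and no copy on node3
        "from_node" : "node1",   // assumption: any node currently at the 2-shard limit
        "to_node" : "node3"
      }
    }
  ]
}

Once that replica relocates, the freed slot on node1 should let the unassigned replica of shard 3 allocate there on its own, since the shards_limit decider would no longer block it.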