Move shard to another node error

I need to move shards to another node.
I get the following error. The command is shown first, and the error after it:


```
POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "yeshut_2022.03.31-000152",
        "shard": 0,
        "from_node": "S003.dom",
        "to_node": "S017.dom"
      }
    }
  ]
}
```

```
{
  "error": {
    "root_cause": [
      {
        "type": "remote_transport_exception",
        "reason": "[S001.dom][][cluster:admin/reroute]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "[move_allocation] can't move 0, from {S003.dom}{EDit5b-bT4-lTuPdJ-LJoA}{UOCe_9OfQC2E2NTUgJFD3w}{}{}{di}{storage_term=cold, xpack.installed=true}, to {S017.dom}{jLHVeaHzSvWkM_Z2X9Zvmg}{EnEF9qvDRWW5woL_J1365g}{}{}{di}{xpack.installed=true}, since its not allowed, reason: [YES(shard has no previous failures)][YES(primary shard for this replica is already active)][YES(explicitly ignoring any disabling of allocation due to manual allocation commands via the reroute API)][YES(can allocate replica shard to a node with version [7.5.1] since this is equal-or-newer than the primary version [7.5.1])][YES(the shard is not being snapshotted)][YES(ignored as shard is not being recovered from a snapshot)][NO(node does not match index setting [index.routing.allocation.require] filters [storage_term:\"cold\"])][YES(the shard does not exist on the same node)][YES(enough disk for shard on node, free: [1.8tb], shard size: [22.2gb], free after allocating shard: [1.8tb])][YES(below shard recovery limit of outgoing: [0 < 20] incoming: [0 < 20])][YES(total shard limits are disabled: [index: -1, cluster: -1] <= 0)][YES(allocation awareness is not enabled, set cluster setting [cluster.routing.allocation.awareness.attributes] to enable it)]"
  },
  "status": 400
}
```

The key part of that error is the single [NO(...)] decider: the index requires nodes with storage_term: cold, and S017.dom does not have that attribute. You will either need to change the index to remove the allocation requirement, or set this node up as a cold node as well.

Could you please provide a detailed action plan for a beginner?

Depending on your configuration, you will need to either update the index-level shard allocation filtering so that the index is no longer required to live on a "cold" node (setting the value to null removes the requirement):

```
PUT yeshut_2022.03.31-000152/_settings
{
  "index.routing.allocation.require.storage_term": null
}
```
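If you go that route, you can confirm the filter is gone before retrying the move. For example (using your index name from above):

```
GET yeshut_2022.03.31-000152/_settings
```

The index.routing.allocation.require.storage_term key should no longer appear in the response.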

Or, you will need to change the attributes of S017.dom so that it also has the storage_term: cold attribute in its configuration file. Example from this link:

> Specify the location of each node with a custom node attribute. For example, if you want Elasticsearch to distribute shards across different racks, you might set an awareness attribute called rack_id in each node’s elasticsearch.yml config file.
>
> ```
> node.attr.rack_id: rack_one
> ```

So, in the elasticsearch.yml file on S017.dom, you will want to look for (or add) a setting similar to node.attr.storage_term:
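Based on the attributes shown for S003.dom in the error output (storage_term=cold), the line on S017.dom would presumably look like this. Note that node.attr.* settings are read at startup, so the node must be restarted for the change to take effect:

```
# elasticsearch.yml on S017.dom — mark this node as cold storage
node.attr.storage_term: cold
```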

If you run GET /_cat/nodeattrs?v and find the S017.dom node, do its attributes align with the other nodes appropriately?
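That API prints one row per attribute per node, with node, host, ip, attr, and value columns. An illustrative example of what you might see (hostnames and IPs here are placeholders, not taken from your cluster):

```
node      host       ip         attr             value
S003.dom  10.0.0.3   10.0.0.3   storage_term     cold
S003.dom  10.0.0.3   10.0.0.3   xpack.installed  true
S017.dom  10.0.0.17  10.0.0.17  xpack.installed  true
```

In this example output, S017.dom has no storage_term row, which is exactly what the allocation error is complaining about.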

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.