Shrink shards not respecting node settings 7.10.0

I have a 5-node cluster: 4 nodes with the data_hot and data_warm roles, and 1 node as cold storage.
When ES goes to shrink an index, I see that for some of the indices it chooses the cold storage node (dx6MirFmT2y9AdXEF02pDw), which causes a conflict, since the _tier_preference cannot be matched with the required node.

    "allocation": {
      "include": {
        "_tier_preference": "data_warm,data_hot"
      "require": {
        "_id": "dx6MirFmT2y9AdXEF02pDw"

The node settings (id dx6MirFmT2y9AdXEF02pDw)

    "node" : {
      "attr" : {
        "temperature" : "cold",
        "transform" : {
          "node" : "false"
        "xpack" : {
          "installed" : "true"
      "name" : "es05",
      "roles" : "data_cold"

ILM error:

Waiting for node [dx6MirFmT2y9AdXEF02pDw] to contain [3] shards, found [0], remaining [3]
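
That message showed up while checking the ILM progress; the explain API is where I see it, e.g. (index name is a placeholder):

    GET /my-index-000001/_ilm/explain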

From GET /_cluster/allocation/explain?include_yes_decisions=true

      "decider" : "filter",
      "decision" : "NO",
      "explanation" : """node does not match index setting [index.routing.allocation.require] filters [_id:"dx6MirFmT2y9AdXEF02pDw"]"""

If I manually update the index settings to point at a different node, then everything just starts working
(this is not a permanent solution):

"settings": {
  "index.routing.allocation.require._id": "wC6YgmrPQPqDqiDHqYjTAQ"

Is there a flaw in the shrink allocation logic that causes it to pick an invalid node?


Welcome to our community! :smiley:

Can you share your ILM policy?
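
You can fetch it with something like this (replace the policy name with your own):

    GET /_ilm/policy/my-policy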
