More than 1 shard not getting assigned to a node

ES version - 6.8
Setup - 3 nodes with default rack_id

I am trying to logically partition a single cluster into separate groups of nodes using the rack_id awareness attribute. The problem I am facing is that only one shard is being assigned per node within the default rack (which has 3 nodes).
If we want to have multiple shards of an index on a single node, is there any way of doing it?
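The rack_id awareness setup is along these lines (a simplified sketch; the attribute value varies per node and the rest of the configuration is omitted):

node.attr.rack_id: default          # in each node's elasticsearch.yml

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id"
  }
}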

{
  "index" : "racktest3",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2021-07-28T04:36:12.746Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "Jq9ziPd6RNCu84sp62Es8Q",
      "node_name" : "NODE1",
      "transport_address" : "NODE1:9300",
      "node_attributes" : {
        "rack_id" : "default",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[racktest3][0], node[Jq9ziPd6RNCu84sp62Es8Q], [P], s[STARTED], a[id=7NrHDQGEQ7aBXCUa4UdZzw]]"
        }
      ]
    },
    {
      "node_id" : "R5zPkbcXRRKCYXpDCscxKA",
      "node_name" : "NODE2",
      "transport_address" : "NODE2:9300",
      "node_attributes" : {
        "rack_id" : "default",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 2,
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[racktest3][0], node[R5zPkbcXRRKCYXpDCscxKA], [R], s[STARTED], a[id=9IAhcRAWReO9vSsf_uBDag]]"
        }
      ]
    },
    {
      "node_id" : "J3wOYnrrTJm4yQuuK4iU7g",
      "node_name" : "NODE3",
      "transport_address" : "NODE3:9300",
      "node_attributes" : {
        "rack_id" : "default",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 3,
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[racktest3][0], node[J3wOYnrrTJm4yQuuK4iU7g], [R], s[STARTED], a[id=ZmZ--EYkQPSjuYMAw9fzCA]]"
        }
      ]
    }
  ]
}

Please share the allocation configuration from your nodes as well as the configuration from the index the shards belong to. Thanks!
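For example, these standard endpoints should show the relevant settings and node attributes (just a sketch of what to run):

GET _cluster/settings
GET _cat/nodeattrs?v
GET racktest3/_settings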

Hi,
The current cluster has 6 nodes, which are segregated into two racks using the rack_id attribute:

[rack_id=default]
17.1.4.19 node-1
17.1.4.7 node-2
17.1.4.24 node-3

[rack_id=rack1]
17.1.4.26 node-4
17.1.4.27 node-5
17.1.4.28 node-6
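The rack assignment comes from the node attribute in each node's elasticsearch.yml, roughly like this (a simplified sketch, other settings omitted):

# node-1, node-2, node-3
node.attr.rack_id: default

# node-4, node-5, node-6
node.attr.rack_id: rack1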

Our observation is that when we create an index with 3 shards and 0 replicas, the primary shards are assigned to the nodes of the specified rack, but when we create an index with 3 shards and 1 replica, the replicas remain unassigned.
Index settings:

{"racktest1":{"settings":{"index":{"routing":{"allocation":{"include":{"rack_id":"rack1"}}},"number_of_shards":"3","provided_name":"racktest1","creation_date":"1628837329914","number_of_replicas":"1","uuid":"cgbMo6l-Tu2jqzpsYPFO4w","version":{"created":"6081199"}}}}}

This is the allocation explain output for that index:

{
  "index" : "racktest1",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2021-08-13T06:48:49.922Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "udB5tWOARZSu_rbpDCQL_w",
      "node_name" : "17.1.4.28",
      "transport_address" : "17.1.4.28:9300",
      "node_attributes" : {
        "rack_id" : "rack1",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[racktest1][0], node[udB5tWOARZSu_rbpDCQL_w], [P], s[STARTED], a[id=OKBK4snoSN26uRClvjjFoQ]]"
        },
        {
          "decider" : "awareness",
          "decision" : "NO",
          "explanation" : "there are too many copies of the shard allocated to nodes with attribute [rack_id], there are [2] total configured shard copies for this shard id and [2] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"
        }
      ]
    },
    {
      "node_id" : "4dVAAORkTj6_V6jZ79Rfjw",
      "node_name" : "17.1.4.27",
      "transport_address" : "17.1.4.27:9300",
      "node_attributes" : {
        "rack_id" : "rack1",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 2,
      "deciders" : [
        {
          "decider" : "awareness",
          "decision" : "NO",
          "explanation" : "there are too many copies of the shard allocated to nodes with attribute [rack_id], there are [2] total configured shard copies for this shard id and [2] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"
        }
      ]
    },
    {
      "node_id" : "T3Mzh1-0RzWDEQLG7rcSew",
      "node_name" : "17.1.4.26",
      "transport_address" : "17.1.4.26:9300",
      "node_attributes" : {
        "rack_id" : "rack1",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 3,
      "deciders" : [
        {
          "decider" : "awareness",
          "decision" : "NO",
          "explanation" : "there are too many copies of the shard allocated to nodes with attribute [rack_id], there are [2] total configured shard copies for this shard id and [2] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"
        }
      ]
    },
    {
      "node_id" : "R5zPkbcXRRKCYXpDCscxKA",
      "node_name" : "17.1.4.7",
      "transport_address" : "17.1.4.7:9300",
      "node_attributes" : {
        "rack_id" : "default",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 4,
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [rack_id:\"rack1\"]"
        }
      ]
    },
    {
      "node_id" : "Jq9ziPd6RNCu84sp62Es8Q",
      "node_name" : "17.1.4.19",
      "transport_address" : "17.1.4.19:9300",
      "node_attributes" : {
        "rack_id" : "default",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 5,
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [rack_id:\"rack1\"]"
        }
      ]
    },
    {
      "node_id" : "J3wOYnrrTJm4yQuuK4iU7g",
      "node_name" : "17.1.4.24",
      "transport_address" : "17.1.4.24:9300",
      "node_attributes" : {
        "rack_id" : "default",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 6,
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [rack_id:\"rack1\"]"
        }
      ]
    }
  ]
}
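Reading the awareness decider message above, its arithmetic appears to work out like this (a worked reading of the output, assuming 1 primary + 1 replica and two rack_id values in the cluster):

total shard copies            = 1 primary + 1 replica  = 2
distinct rack_id values       = {default, rack1}       = 2
max copies per rack_id value  = ceil(2 / 2)            = 1
copies restricted to rack1    = 2   (index.routing.allocation.include.rack_id = rack1)
2 > 1, so the awareness decider rejects the replica on every rack1 node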
