Not enough nodes to allocate all shard replicas

Hey all,

My Elasticsearch deployment is showing the following health issue: "Not enough nodes to allocate all shard replicas".

I tried to fix it by following the suggested guide and scaled our deployment up by increasing the number of availability zones from 1 to 2. When I did this, the health warning seemed to be resolved.

However, when I scale back down, the exact same health warning returns. Should I give it some more time to resolve? I'm not sure how to fix this. I'm quite new, so any help is appreciated. If you need more logging information, feel free to ask.

GET asset/_settings/index.routing.allocation.include._tier_preference?flat_settings
{
  "asset": {
    "settings": {
      "index": {
        "routing": {
          "allocation": {
            "include": {
              "_tier_preference": "data_content"
            }
          }
        }
      }
    }
  }
}

GET /asset/_settings/index.number_of_replicas
{
  "asset": {
    "settings": {
      "index": {
        "number_of_replicas": "1"
      }
    }
  }
}

GET /_cat/nodes?h=node.role
himrst

Hi michaelswaanen,

I think you might be getting the error because Elasticsearch is trying to assign the replica shard to another instance.

I think you can set the number of replicas to 0 on that index. Then you'll have the primary shard only, and Elasticsearch won't try to place a replica on another instance. A good place to get information on your shards is the cat shards API: cat shards API | Elasticsearch Guide [8.6] | Elastic

GET /_cat/shards?pretty

You can do the same with indices, and it will show you the status of the index, which for asset is probably yellow: cat indices API | Elasticsearch Guide [8.6] | Elastic
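For example, something along these lines should show the index health (the index name asset is taken from your earlier output, and the columns are optional ones from the cat indices docs):

GET /_cat/indices/asset?v&h=health,status,index,pri,rep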

I hope this helps

Kind Regards,
Chris

Hey @chris3

Thanks for getting back to me!

The deployment is indeed yellow.

What do you mean by "you can set the number of replicas to 0 on that index"? Could you please elaborate on this? I already tried to decrease the number of replicas to 0, but that wasn't successful.

I see!

Elasticsearch won't assign a replica shard to the same instance as its primary. It's recommended to have 1 primary and 1 replica: the replica is a redundant copy, so if the primary fails, the replica is promoted to primary and Elasticsearch creates a new replica.

How many instances do you have in your cluster, and do you have an index lifecycle management policy that might be using tiering? Also, if you check the asset index from the dev console:

GET _cat/shards/asset*

The end of each line will show you which node the shard is on, along with the shard status.
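If the default output is hard to read, you can also ask for specific columns (these are standard cat shards columns, so adjust to taste), which makes the unassigned replica and its reason easier to spot:

GET _cat/shards/asset*?v&h=index,shard,prirep,state,unassigned.reason,node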

Kind Regards,
Chris

Hello @michielswaanen, welcome to the community!

Would you be able to provide the output of the command GET _cluster/allocation/explain? This will give more insight into why the shard is not being assigned to any node.
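By default that call explains an arbitrary unassigned shard; if you want to target the replica of a specific index, you can pass a body (index, shard and primary are standard fields of the allocation explain API, and the index name here is only an assumption based on your earlier output):

GET _cluster/allocation/explain
{
  "index": "asset",
  "shard": 0,
  "primary": false
}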

The output of GET _cluster/allocation/explain

{
  "can_allocate": "no",
  "index": "asset-cloned",
  "node_allocation_decisions": [
    {
      "node_decision": "no",
      "transport_address": "10.43.0.105:19174",
      "weight_ranking": 1,
      "node_name": "instance-0000000002",
      "node_id": "o4gGXk2ATgGBmI2OXjMyiw",
      "deciders": [
        {
          "decision": "NO",
          "explanation": "a copy of this shard is already allocated to this node [[asset-cloned][0], node[o4gGXk2ATgGBmI2OXjMyiw], [P], s[STARTED], a[id=MZLkdTt6QL2Qx24Dq8lc7Q], failed_attempts[0]]",
          "decider": "same_shard"
        }
      ],
      "node_attributes": {
        "server_name": "instance-0000000002.98ad3db23ebb480889fa7c0be5894815",
        "availability_zone": "europe-west1-b",
        "region": "unknown-region",
        "instance_configuration": "gcp.es.datahot.n2.68x10x45",
        "xpack.installed": "true",
        "logical_availability_zone": "zone-0",
        "data": "hot"
      }
    }
  ],
  "current_state": "unassigned",
  "shard": 0,

The output of GET _cat/shards/asset

asset        0 p STARTED    11589 48.4mb 10.43.0.105 instance-0000000002
asset        0 r UNASSIGNED  

It seems like you have a single-node cluster; however, your index settings require 1 primary and 1 replica shard to be allocated. This will certainly cause issues, since a primary and its replica cannot reside on the same node.
So, for now, you have 2 options:

  1. Scale the indices down to primaries only by setting number_of_replicas=0 (see the sketch below).
  2. Add at least one more node so both primary and replica shards can be allocated.
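For option 1, a minimal sketch from the dev console (the asset* pattern is an assumption, so point it at whichever indices actually hold the unassigned replicas, e.g. asset-cloned from the allocation explain output above):

PUT /asset*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}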

Please note, having a 2-node cluster may lead to a split-brain issue when electing the master node, hence it's recommended to have an odd-numbered node cluster.

This is not the case in recent versions. The reason 3 master-eligible nodes are recommended is that master election requires a strict majority of master-eligible nodes to agree. If you have 3 nodes, a master can be elected even if one node is unavailable. If you instead only have 2 nodes, a master can only be elected if both nodes are available, and the cluster will not fully function if one node is missing.

@Christian_Dahlqvist what you mentioned is about node quorum. When it comes to master node election, let's say you have a 2-node cluster and each node votes for itself to become master; you still have a split brain. In this case, you may configure one of the nodes as voting-only while the other keeps the master role (by default).

No, that is incorrect. If they can communicate they will negotiate until one is elected. If they cannot communicate they will not be able to reach consensus and no master will be elected, so there is no split brain.

Just to reinforce what Christian is saying, the statement above is indeed false. It hasn't really ever been true (unless you misconfigure something) and since 7.0 it hasn't even been possible to misconfigure things so badly that there is a risk of a split-brain. All sizes of Elasticsearch cluster are resistant to split brain situations no matter how they are configured.

@Christian_Dahlqvist @DavidTurner My apologies for the incorrect understanding :pray: and thank you for clearing that up :slight_smile:
