About Voting Only Nodes

Hi,
With voting configurations, is it recommended that only the minimum necessary number of nodes be voting-only? Or is it legitimate to make all nodes voting-only?

I ask because we have clusters whose topologies vary. For example, if we make one of the tiers voting-only, it could be that the size of this tier isn't equal to the number of deployed masters, which is what we would prefer. The current structure of our deployment makes it impractical to assign "N" nodes to be voting-only.

Thx
D

You should, IMHO, have at most one voting-only node. The other master-eligible nodes need to be electable so they can actually become master.

No. The voting-only node is, as far as I know, designed to be used as a single tie-breaker when you have a cluster deployed across 2 zones. If you have more than 2 zones, I do not think you should use a voting-only node at all.

The docs describe the recommended practice here (supporting what Christian just said):

However, it is good practice to limit the number of master-eligible nodes in the cluster to three. [...] You may configure one of your master-eligible nodes to be a voting-only node so that it can never be elected as the master node.
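For concreteness, on a 7.x cluster that still uses the legacy per-role settings, the tie-breaker described in the docs quote above might be configured in elasticsearch.yml roughly like this (a sketch, not a complete config):

```yaml
# elasticsearch.yml for the tie-breaker node (legacy 7.x role settings)
# A voting-only node must itself be master-eligible; it votes in
# elections but can never be elected master.
node.master: true
node.voting_only: true
```

On 7.9+ the equivalent using the newer `node.roles` syntax would be `node.roles: [ master, voting_only ]`.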


If you want to support different topologies, might it be worth creating a base cluster of 3 dedicated master nodes, out of which one can be voting-only if you only have 2 zones, to which you then add other non-master eligible nodes as appropriate (you do need at least one data node)?
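Assuming the legacy 7.x role settings, each node in such a base cluster of dedicated masters might be configured along these lines (illustrative sketch only):

```yaml
# elasticsearch.yml for a dedicated master-eligible node
# (legacy 7.x role settings): master only, no data, no ingest
node.master: true
node.data: false
node.ingest: false
```

With only 2 zones, one of the three would additionally get `node.voting_only: true`; data nodes are then added separately as the topology requires.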

Is there a limit that we can reach? How about clusters with XYZ nodes? Should we still use only three master-eligible nodes?

Yes. Only one of the master-eligible nodes will at any point act as master. The others are there for resiliency. For larger clusters the size/resources of the dedicated master nodes may need to increase, but the number of master nodes does not scale based on cluster size.

My question was not how many master nodes can be in a cluster at once, but whether there is a limit on the number of master-eligible nodes.
For a master-eligible node, in elasticsearch.yml I have to set:

node.master: true

Is there a limit on the number of nodes that can be configured like this?
Example:
Cluster made of 300 nodes. I want to use 50 as master-eligible. Will I be able to do this?

The problem we have is that we have processes which run occasionally in SIT which simulate losing an AZ, so we lose a master (we have three). The ASG then starts firing up a new instance, but then the AZ is restored, and the ASG kills one of the other masters to rebalance the AZs. That then kills the cluster.

If we were to have one voting-only node, that would be insufficient in the case where we lose the AZ which contains that voting-only node. So I'm thinking we need at least three?

Also, how quickly does cluster.auto_shrink_voting_configuration take effect when masters are dropping out of the cluster vs new masters joining?
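For reference, that setting exists as a dynamic cluster-wide setting and defaults to enabled; when on, Elasticsearch automatically shrinks the voting configuration as master-eligible nodes leave (while trying to keep at least three voting nodes). The exact timing of the shrink is not something the docs quantify, so the question above stands:

```yaml
# elasticsearch.yml (also settable dynamically via the cluster
# settings API); true is the default
cluster.auto_shrink_voting_configuration: true
```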

That sounds problematic. The master-eligible nodes should be static and, when lost, brought back. You should not spin up and add a new master-eligible node only to later remove one of them again. This may have worked in older versions, but from version 7 onwards there are more stringent checks in place to improve resiliency, and these may not cope well with master-eligible nodes being added and removed in quick succession.

I would recommend you change this process.


I see what you mean. I'll take this up internally. Thank you.
