Is there a way to automatically choose the other master-eligible node as master if the elected master node goes down (in unexpected scenarios)?


We are facing some issues while upgrading ES from 5.5.2 -> 7.8.1.
We have a two-node deployment: ES is installed on both nodes and both are master eligible.
In the earlier version, when one node went down,
the other node was chosen as master automatically.
But in 7.8.1 this is not happening automatically; we get the issue below in the ES logs.


[node-2] master not discovered or elected yet, an election requires a node with id
[GErKuIS3Q2i2qHWriSHphA], have discovered [{node-2}{GdjI2u5fRuawVOh-FiHT7g}
{LBnzTA94RZ6SDrYPKiM9bQ}{}{Ip_node_2:9300}{dimr}] which is not a quorum;
discovery will continue using [Ip_node_1:9300] from hosts providers and [{node-2}
{dimr}] from last-known cluster state; node term 9, last-accepted version 193 in term 9

ID of node 1 = GErKuIS3Q2i2qHWriSHphA
ID of node 2 = GdjI2u5fRuawVOh-FiHT7g

Below is the es.yml config:
discovery.seed_hosts: ["node-1","node-2"]
cluster.initial_master_nodes: ["node-1","node-2"]

We are able to form a cluster with these two nodes and can see that data is replicated between them.
We have not set "node.master" explicitly, as it is true by default.

We would like to get some ideas on this issue. What changes do we need to make to address it?

We also saw a voting config exclusions API to exclude a node from the voting process:
POST "/_cluster/voting_config_exclusions?node_names=node-1"
To use this API, both nodes must be up and available.
In scenarios like an unexpected crash or hardware failure, node-1 will not be available
(which was chosen as master in the initial election).
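For reference, this is roughly how the exclusion call above would look from the command line. This is a sketch, assuming a node reachable at localhost:9200; as noted, it only succeeds while the cluster still has an elected master, so it cannot help after node-1 has already crashed:

```shell
# Exclude node-1 from the voting configuration (7.x API).
curl -X POST "localhost:9200/_cluster/voting_config_exclusions?node_names=node-1&timeout=30s"

# Clear the exclusions afterwards so node-1 can vote again when it returns:
curl -X DELETE "localhost:9200/_cluster/voting_config_exclusions"
```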

Once the master node is down, is there a way to configure these properties?

How can the other master-eligible node be chosen as master automatically
(if the elected master node is not available)? Is there any configuration available in ES?

Welcome to our community! :smiley:
Can you please edit your post and remove the code formatting from the text parts? It's extremely hard to read as it is.

A two-node cluster cannot be highly available, and if one of the two master-eligible nodes goes down, the remaining node should not be able to become master. This is done to protect against split-brain scenarios and data loss. If your older cluster allowed this, it was misconfigured.


It is removed; can you check it now?

Hi Mark,
Any updates or suggestions on this issue?

For a cluster to be highly available you need at least three master eligible nodes. With just two nodes any node failure will result in no master being elected, which is the correct behaviour.
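To illustrate the arithmetic behind this (a minimal sketch, not Elasticsearch code): electing a master requires a majority (quorum) of the master-eligible nodes. With two nodes the quorum is two, so losing either node leaves the survivor unable to elect itself; with three nodes the quorum is two, so one failure is tolerated.

```python
def quorum(master_eligible: int) -> int:
    """Smallest majority of the master-eligible nodes."""
    return master_eligible // 2 + 1

def can_elect_master(total: int, surviving: int) -> bool:
    """Can the surviving nodes still form a quorum?"""
    return surviving >= quorum(total)

# Two-node cluster: losing one node loses the quorum.
print(can_elect_master(total=2, surviving=1))  # False

# Three-node cluster: one node can fail and a master is still elected.
print(can_elect_master(total=3, surviving=2))  # True
```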

@Christian_Dahlqvist, @warkolm
As most of our customers use a two-node deployment, we need to make this work with a two-node setup. Since this was supported in the earlier ES 5.5.2 version, we need similar behavior in 7.8 as well. Irrespective of the data impact, is there any way to achieve this?

If this worked earlier, it is because your clusters were misconfigured and could therefore silently lose data. Version 7 of Elasticsearch has made improvements to prevent this type of misconfiguration, so your old behaviour is no longer possible. As far as I know there is no way around it. You therefore need to add a third node, but note that this can be a quite small voting-only node used to break any deadlock.
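For illustration, a dedicated voting-only tiebreaker node's elasticsearch.yml might look roughly like this in 7.x (hostnames are placeholders; `node.voting_only` requires the node to be master eligible):

```yaml
# elasticsearch.yml for a small tiebreaker node (7.x, pre-node.roles syntax)
cluster.name: my-cluster
node.name: tiebreaker
node.master: true          # must be master eligible in order to vote
node.voting_only: true     # may vote in elections but is never elected master
node.data: false           # holds no data
node.ingest: false
discovery.seed_hosts: ["node-1", "node-2", "tiebreaker"]
```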


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.