Master not discovered or elected yet, an election requires one or more nodes that have already participated as master-eligible nodes in the cluster but this node was not master-eligible the last time it joined the cluster, have discovered

Hello Team,

I had an Elastic cluster with node1 (master) and node2, running Elasticsearch and Kibana 7.8.0, working fine on CentOS servers.

I had to format node1, so I formatted it and installed a fresh Elasticsearch and Kibana using the same configuration. But this time I wanted to make node2 the master, so I set node.master to true on node2 and false on node1.
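
For context, the relevant parts of the two elasticsearch.yml files look roughly like this (trimmed; the cluster name is a placeholder, the IPs are the ones from the logs below):

# node1 elasticsearch.yml (relevant settings only)
cluster.name: my-cluster            # placeholder name
node.name: node1
node.master: false                  # no longer master-eligible
network.host: 10.255.215.203
discovery.seed_hosts: ["10.255.215.203", "10.255.215.204"]

# node2 elasticsearch.yml (relevant settings only)
cluster.name: my-cluster
node.name: node2
node.master: true                   # intended new master
network.host: 10.255.215.204
discovery.seed_hosts: ["10.255.215.203", "10.255.215.204"]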

When I do that, I get the error below. Please help.

[2020-07-08T12:45:11,526][INFO ][o.e.t.TransportService   ] [node2] publish_address {10.255.215.204:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}, {10.255.215.204:9300}
[2020-07-08T12:45:12,428][INFO ][o.e.b.BootstrapChecks    ] [node2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-07-08T12:45:22,444][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node2] master not discovered or elected yet, an election requires one or more nodes that have already participated as master-eligible nodes in the cluster but this node was not master-eligible the last time it joined the cluster, have discovered [{node2}{4f2ZBN_ORVGXhxVlztm1-g}{diqlUpN8SS2C0JxES0a6Lw}{10.255.215.204}{10.255.215.204:9300}{dilmrt}{ml.machine_memory=134792798208, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [10.255.215.203:9300] from hosts providers and [{node2}{4f2ZBN_ORVGXhxVlztm1-g}{diqlUpN8SS2C0JxES0a6Lw}{10.255.215.204}{10.255.215.204:9300}{dilmrt}{ml.machine_memory=134792798208, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 18, last-accepted version 62231 in term 18
[2020-07-08T12:45:32,447][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node2] master not discovered or elected yet, an election requires one or more nodes that have already participated as master-eligible nodes in the cluster but this node was not master-eligible the last time it joined the cluster, have discovered [{node2}{4f2ZBN_ORVGXhxVlztm1-g}{diqlUpN8SS2C0JxES0a6Lw}{10.255.215.204}{10.255.215.204:9300}{dilmrt}{ml.machine_memory=134792798208, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [10.255.215.203:9300] from hosts providers and [{node2}{4f2ZBN_ORVGXhxVlztm1-g}{diqlUpN8SS2C0JxES0a6Lw}{10.255.215.204}{10.255.215.204:9300}{dilmrt}{ml.machine_memory=134792798208, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 18, last-accepted version 62231 in term 18
[2020-07-08T12:45:42,449][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node2] master not discovered or elected yet, an election requires one or more nodes that have already participated as master-eligible nodes in the cluster but this node was not master-eligible the last time it joined the cluster, have discovered [{node2}{4f2ZBN_ORVGXhxVlztm1-g}{diqlUpN8SS2C0JxES0a6Lw}{10.255.215.204}{10.255.215.204:9300}{dilmrt}{ml.machine_memory=134792798208, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [10.255.215.203:9300] from hosts providers and [{node2}{4f2ZBN_ORVGXhxVlztm1-g}{diqlUpN8SS2C0JxES0a6Lw}{10.255.215.204}{10.255.215.204:9300}{dilmrt}{ml.machine_memory=134792798208, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 18, last-accepted version 62231 in term 18

Thanks,
Naveen Velumani.

If it's disposable data, reformat the database on node2 as well. node2 remembers the old node1 master, but the new node1 has a different internal ID (a UUID, I don't recall exactly what it's called). If you want to keep the data, I'm not sure that's possible now.
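
If you want to check, each node reports the cluster UUID it thinks it belongs to on the root endpoint (IPs taken from your logs; as far as I recall, a node that hasn't joined a cluster yet shows "_na_"):

curl -s "http://10.255.215.203:9200/" | grep cluster_uuid
curl -s "http://10.255.215.204:9200/" | grep cluster_uuid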

If you lose a majority of master-eligible nodes you are in trouble. If you lose all master-eligible nodes, I do not think you can recover from that. I would recommend reading this and restoring a snapshot into a new cluster.
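
Roughly, restoring into a new cluster looks like this (repository type, path, and names are placeholders; for an fs repository the location must also be listed under path.repo in elasticsearch.yml on every node):

# Register the repository that holds the existing snapshots
curl -X PUT "localhost:9200/_snapshot/my_backup" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": { "location": "/mount/backups/my_backup" }
}'

# List the snapshots available in it
curl -X GET "localhost:9200/_snapshot/my_backup/_all"

# Restore one of them into the new cluster
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"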


I am trying to bring Elastic back up without disposing of the data, as it would take us weeks to repopulate it.

I did lose one master-eligible node, so is there no way I can recover the data?

So can I make node2 independent, start it, and take a snapshot? Then delete the data, start it back up as a cluster, and restore from the snapshot. Will that work?

I am not sure that is possible, so I will have to leave that for someone else.

What was the exact sequence of events? Was node2 master-eligible, and was it ever the master? From that error message, it seems not.

Normally you'd make it master-eligible, then stop node1 so that node2 takes over as master, promotes its replicas to primaries, and holds both the cluster metadata and all the index data. Then you can do whatever you want with node1. But without node2 being a master and holding all the index data BEFORE you stopped node1, I don't see how you can keep the data: the master has all the information to manage, find, etc. the data, and if you lost it, well, you lost it.
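
If you ever do that handover again, it's worth verifying both points before stopping node1. A quick sketch using the standard _cat endpoints:

# Confirm node2 has actually been elected master
curl -s "localhost:9200/_cat/master?v"

# Confirm health is green, i.e. every shard has an active copy on node2
curl -s "localhost:9200/_cat/health?v"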

Note also that the small-clusters doc recommends your setup, with only one master-eligible node, which of course means total cluster loss if you lose the master (not good), so they later recommend a third, voting-only node. Essentially, the new cluster voting system makes it nearly impossible to run a two-node system (which sucks for cloud regions with only two AZs).
https://www.elastic.co/guide/en/elasticsearch/reference/current/high-availability-cluster-small-clusters.html
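
For what it's worth, the voting-only tiebreaker they suggest is just a master-eligible node that can never win the election itself. In 7.x the config is roughly this (node name is a placeholder):

# elasticsearch.yml for a small third node that only breaks election ties
node.name: tiebreaker       # placeholder
node.master: true           # must be master-eligible in order to vote
node.voting_only: true      # ...but can never be elected master itself
node.data: false            # holds no index data
node.ingest: false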

No, you need to restore from a snapshot taken before the problems started. You can't "make node2 independent", so you can't take a snapshot now.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.