[2024-11-30T10:19:13,234][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-3] master not discovered yet: have discovered [{node-3}{CpTTLncQTpq2jmzKCp6BfQ}{OdGCenkKT52GKc75CphmxQ}{10.0.0.6}{10.0.0.6:9300}{cdhilrstw}{ml.machine_memory=8271114240, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}, {node-1}{b3lPF74ZTO-wg9eIjgTuSA}{ZjQP7vf7TOSFTLiwiQzC-Q}{10.0.0.4}{10.0.0.4:9300}{cdhilmrstw}{ml.machine_memory=8271114240, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}]; discovery will continue using [10.0.0.4:9300, 10.0.0.5:9300] from hosts providers and [{node-2}{Id07aq88SF-nclMGG-sOww}{PbX19mFtSxWwXCM9MMCs9w}{10.0.0.5}{10.0.0.5:9300}{cdhilmrstw}{ml.machine_memory=8271110144, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, {node-1}{b3lPF74ZTO-wg9eIjgTuSA}{ZjQP7vf7TOSFTLiwiQzC-Q}{10.0.0.4}{10.0.0.4:9300}{cdhilmrstw}{ml.machine_memory=8271114240, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}] from last-known cluster state; node term 35, last-accepted version 384 in term 35
node-1 is master eligible; node-2 is the current master. After I turn node-2 off, it is no longer master, and the expected behaviour is for node-1 to be elected as the new master, but this is not happening.
Which version of Elasticsearch are you using? It must be quite old if the configuration settings are correct. I would recommend that you upgrade to the latest version.
I am not sure I follow this. Do you have 3 nodes that are all master eligible (that is what it looks like based on the config)?
I am using 7.10.0. Initially I had 2 nodes, both master eligible. Then I learned that if you have 2 master-eligible nodes and your master goes down, the other node cannot become master, because you need a quorum of 2 nodes to elect one. So I added another node and made it master eligible, with the same configuration as above. Still, I was facing the same problem.
Then I made node-3 a non-master node to check; still the same problem. I reverted the setting and made node-3 master eligible again.
Now everything is working fine, and I am confused.
I guess what happened was: when I created the cluster with 2 nodes, only those 2 nodes were participating in the election, even after I added node-3, until I restarted the whole cluster.
Can you please share any guidelines? This was an experiment, and I need to recreate it in production. With high traffic, we are migrating our Atlas Search index to self-hosted Elasticsearch.
I need to make it resilient. I have also added snapshots.
Why are you using such an old version that has been EOL a long time?
That does not make sense. Did you verify that all 3 nodes were part of the cluster after you added the new node, e.g. through the cat nodes API? What does the API show? Did the logs of the new node indicate that it formed up with the cluster?
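As a quick way to verify this (a sketch, assuming you query any node on the default HTTP port), the cat nodes API lists every node that has joined the cluster and marks the elected master:

```
GET _cat/nodes?v&h=name,node.role,master
```

The `master` column shows `*` for the elected master and `-` for the rest; any node with `m` in `node.role` is master eligible. If node-3 never appears in this output, it never actually joined the cluster.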
This setting is not valid or needed in Elasticsearch 7.0+.
I was using 7.10 as an experiment; in production I'll use 8+.
Yes, I did check; all 3 nodes were part of the cluster.
node-1 was master; node-2 and node-3 were master eligible.
Anyway, I am reconfiguring it for production with 8.x, as this was an experiment to check whether we can migrate from Atlas Search to Elasticsearch.
If you have any suggestions, please let me know.
Yes, these logs were taken when node-3 was removed from the master-eligible role. After that I reconfigured it to be master eligible, the cluster started working, and the problem was fixed.
Then Elasticsearch is working as designed: if you had only 2 master-eligible nodes in the cluster and removed one, you won't have a working cluster, because a majority of the master-eligible nodes is required to elect a master. To tolerate the loss of any one node, you need at least 3 master-eligible nodes.
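For a resilient 3-node setup, a minimal elasticsearch.yml sketch could look like the following. This is an illustration only: the cluster name is an assumption, and the node names and IPs are taken from the log output above; adjust `node.name` and `network.host` on each node.

```yaml
# elasticsearch.yml for node-1 (10.0.0.4); repeat on node-2 and
# node-3 with their own node.name and network.host values.
cluster.name: my-cluster
node.name: node-1
network.host: 10.0.0.4

# All three nodes are master eligible and hold data.
node.roles: [ master, data ]

# Point discovery at all three nodes.
discovery.seed_hosts: ["10.0.0.4", "10.0.0.5", "10.0.0.6"]

# Needed only for the very first bootstrap of a brand-new cluster;
# remove it once the cluster has formed.
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
```

With three master-eligible nodes, the cluster retains a quorum (2 of 3) after losing any single node, so a new master can still be elected.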