I have just upgraded my Elasticsearch cluster from 8.3.3 to 8.8.0. Previously I had 3 master nodes in this cluster, but I want to have only one master node after the upgrade (for testing purposes). However, after the upgrade I get the message
master not discovered or elected yet, an election requires at least 2 nodes with ids from [..., ..., ...], have only discovered non-quorum [node-1]... from last-known cluster state; node term 59, last-accepted version 160791 in term 59;...
Now it keeps trying to find the other nodes.
I've already commented out discovery.seed_hosts and set cluster.initial_master_nodes: ["node-1"]; both settings used to list all 3 nodes.
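For reference, the relevant part of node-1's elasticsearch.yml now looks roughly like this (names simplified; the commented-out line shows what was there before):

    node.name: node-1
    # discovery.seed_hosts: ["node-1", "node-2", "node-3"]
    cluster.initial_master_nodes: ["node-1"]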
How do I get ES to accept a configuration with only 1 master node?
IMPORTANT: After the cluster has formed, remove the cluster.initial_master_nodes setting from each node’s configuration and never set it again for this cluster.
These docs describe the process for removing master-eligible nodes from the cluster.
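In case it helps, a rough sketch of that process from Kibana Dev Tools (the node names here are just my guess based on your post): exclude the master-eligible nodes you want to retire from the voting configuration, then shut them down one at a time.

    POST /_cluster/voting_config_exclusions?node_names=node-2,node-3

The request only returns successfully once the excluded nodes are no longer part of the voting configuration, so a 200 response means it is safe to stop them.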
Thanks, I am able to start ES and see it in Kibana. But when I run POST /_cluster/voting_config_exclusions?node_names=node-2 from Kibana's Dev Tools page, I get the output Request failed to get to the server (status code: 200).
Running GET /_cluster/state?filter_path=metadata.cluster_coordination.voting_config_exclusions, I do see node-2 in the list. Does this mean I can safely shut down node-2 now?
I don't know where the message Request failed to get to the server is coming from; it's definitely not a message from Elasticsearch anyway. ES returned status code: 200, which means success.
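And yes: since node-2 is listed under voting_config_exclusions and the call succeeded, it should be safe to shut node-2 down now. Once the retired node is stopped, clear the exclusion list again so it doesn't stick around (a plain Dev Tools request, nothing else needed):

    DELETE /_cluster/voting_config_exclusions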