Setting discovery.zen.minimum_master_nodes
to a value higher than the number of nodes currently in the cluster effectively leaves the cluster without a master and unable to process requests.
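For comparison, the documented guidance is to set this value to a quorum of the master-eligible nodes, (N / 2) + 1. A minimal elasticsearch.yml sketch for a 3-node cluster (the cluster and node names are taken from the output below; everything else is illustrative):

    cluster.name: es-clusters
    node.name: node-0
    # For 3 master-eligible nodes, quorum is (3 / 2) + 1 = 2.
    # Any value higher than the number of live master-eligible nodes
    # makes a master election impossible.
    discovery.zen.minimum_master_nodes: 2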
The official page below lists this bug as fixed.
https://www.elastic.co/guide/en/elasticsearch/resiliency/current/index.html
The official 1.5.0 release addressed this defect by adding validation, but in my opinion that does not solve the problem: when a node in the cluster exits due to a crash, the cluster still cannot elect a new master. I verified this on version 1.5.0. With discovery.zen.minimum_master_nodes: 3 configured, I started a 3-node cluster, which ran normally, then shut down one node to simulate a crash and polled another node for its status. The results are as follows:
"status" : 503,
"name" : "node-0",
"cluster_name" : "es-clusters",
"version" : {
"number" : "1.5.0",
"build_hash" : "581bcd3deec6302b0a0a6ac3935556400efbce15",
"build_timestamp" : "2023-02-27T02:46:06Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
Obviously the cluster cannot elect a new master; I can see "not enough master nodes"
in es-clusters.log.
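For reference, the reproduction looks roughly like this; the install paths, PID file locations, and port 9200 are assumptions of a single-host test setup, not part of the original report:

    # Each node's config/elasticsearch.yml contains:
    #   cluster.name: es-clusters
    #   discovery.zen.minimum_master_nodes: 3
    # Start three nodes as daemons, recording their PIDs
    # (install paths are illustrative).
    /opt/es/node-0/bin/elasticsearch -d -p /tmp/node-0.pid
    /opt/es/node-1/bin/elasticsearch -d -p /tmp/node-1.pid
    /opt/es/node-2/bin/elasticsearch -d -p /tmp/node-2.pid

    # Simulate a crash: kill one node's JVM outright.
    kill -9 "$(cat /tmp/node-2.pid)"

    # Poll a surviving node: with only 2 of the required 3 master-eligible
    # nodes left, the root endpoint returns the "status" : 503 shown above.
    curl -s 'http://localhost:9200/?pretty'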
In addition, when the cluster is started with discovery.zen.minimum_master_nodes > 3,
it cannot start normally, yet the log reports no error and simply prints "started".
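A quick way to confirm that a cluster which printed "started" still cannot serve requests (the port and log path are assumptions based on the cluster name above):

    # The node logs "started", but without a master the HTTP API returns 503.
    curl -s 'http://localhost:9200/?pretty'
    # The reason is only visible in the cluster log, not as a startup error.
    grep 'not enough master nodes' logs/es-clusters.log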