Elasticsearch 7.0 cluster master node handover problem

[2019-05-14T17:29:05,931][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1-search] master not discovered or elected yet, an election requires a node with id [oTcvvEb8R2G73xLl0U3srQ], have discovered [] which is not a quorum; discovery will continue using [110.242.49.24:9300] from hosts providers and [{node-2-master}{oTcvvEb8R2G73xLl0U3srQ}{UW5CKZZSRJqZJHdu4_XqdQ}{110.242.49.24}{110.242.49.24:9300}{ml.machine_memory=67530166272, ml.max_open_jobs=20, xpack.installed=true}, {node-1-search}{Q7fHLH70RiKiYp_QPKxcgg}{BU0h0J0MTA-zwA7Ka2zPQw}{110.242.49.23}{110.242.49.23:9300}{ml.machine_memory=67522936832, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 5, last-accepted version 191 in term 5

[2019-05-14T17:29:06,144][DEBUG][o.e.a.a.c.n.i.TransportNodesInfoAction] [node-1-search] failed to execute on node [oTcvvEb8R2G73xLl0U3srQ]
org.elasticsearch.transport.NodeNotConnectedException: [node-2-master][110.242.49.24:9300] Node not connected

I stopped the master service on node-2-master, but node-1-search did not take over as master.
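For reference, a quick way to confirm that no master has been elected is to query the surviving node over HTTP, e.g. with curl against the address and port from the configs below (a sketch; any API that needs the cluster state behaves the same):

curl '110.242.49.23:9200/_cat/nodes?v'

While no master is elected this typically fails with a 503 master_not_discovered_exception instead of listing the nodes.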

cluster.name: qiniu-ELK
node.name: node-2-master
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 110.242.49.24
http.port: 9200
discovery.seed_hosts: ["110.242.49.23","110.242.49.24"]
cluster.initial_master_nodes: ["110.242.49.23", "110.242.49.24"]
gateway.recover_after_nodes: 1
indices.query.bool.max_clause_count: 8192
search.max_buckets: 100000
discovery.zen.minimum_master_nodes: 1

master node configuration

cluster.name: qiniu-ELK
node.name: node-1-search
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 110.242.49.23
http.port: 9200
discovery.seed_hosts: ["110.242.49.23", "110.242.49.24"]
cluster.initial_master_nodes: ["110.242.49.23", "110.242.49.24"]
gateway.recover_after_nodes: 1
indices.query.bool.max_clause_count: 8192
search.max_buckets: 100000
discovery.zen.minimum_master_nodes: 1

data node configuration
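Note that neither file sets node roles explicitly, so under the 7.0 defaults (node.master: true, node.data: true) both nodes are master-eligible and hold data; that is why both take part in the election in the log above. If the names were meant to imply a dedicated split, the pre-7.9 role flags would look something like the sketch below, though this would leave only one master-eligible node:

# on node-2-master
node.master: true
node.data: false

# on node-1-search
node.master: false
node.data: true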

The discovery.zen.minimum_master_nodes setting is ignored in version 7, but if you were using it in an earlier version in a cluster with 2 master-eligible nodes then you were at serious risk of data loss.
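In 7.x the quorum is instead tracked automatically in the cluster state as a voting configuration, so the zen setting can simply be deleted from both files. Once a master is elected again, something like the following should show which nodes' votes are currently required (a sketch using the generic filter_path response filter):

curl '110.242.49.23:9200/_cluster/state?filter_path=metadata.cluster_coordination'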

This blog post has more details about the changes in version 7, specifically:

It is also safe to remove nodes simply by stopping them as long as you do not stop half or more of the master-eligible nodes all at once.

In this case, you have removed half of the master-eligible nodes all at once (i.e. one of the two nodes), so the remaining node does not form a majority.
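Spelling out the arithmetic: an election needs a majority of the master-eligible nodes, i.e. floor(n/2) + 1 votes. With n = 2 the quorum is floor(2/2) + 1 = 2 votes, and stopping node-2-master leaves only one vote available, which is why the log above insists on the missing node id [oTcvvEb8R2G73xLl0U3srQ].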

Hello

You mean I need at least three, don't you?

If you want the cluster to carry on working even if you shut down one of the nodes then you need at least three master-eligible nodes.
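For example, a third master-eligible node brings the quorum to floor(3/2) + 1 = 2, so the cluster survives losing any one node. A minimal sketch of the third node's elasticsearch.yml, assuming a hypothetical host 110.242.49.25 and node name node-3 (the existing nodes' discovery.seed_hosts lists would also be extended to include it):

cluster.name: qiniu-ELK
node.name: node-3
network.host: 110.242.49.25
http.port: 9200
discovery.seed_hosts: ["110.242.49.23", "110.242.49.24", "110.242.49.25"]

Note that cluster.initial_master_nodes is only consulted when a brand-new cluster first bootstraps, so it should not be set on a node joining an existing cluster.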

Okay, thank you.
