7.0 master not discovered yet

I am having difficulty migrating from 6.4 to 7. In our QA environment, I would like to form a cluster with two hosts, psclxd00546 and psclxd00547. The error message I'm getting is:

[2019-05-14T11:38:49,908][WARN ][o.e.c.c.ClusterFormationFailureHelper] [psclxd00547] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [psclxd00546, psclxd00547] to bootstrap a cluster: have discovered []; discovery will continue using [10.41.229.125:9300, 10.41.229.126:9300] from hosts providers and [{psclxd00547}{OezZQFHxRiSXLXflu-M0_g}{TrIkeEfkTRCnHQnvSH5vQQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=101115412480, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0

Below is my elasticsearch.yml on host psclxd00547.

cluster.name: my_qa
node.name: psclxd00547
node.master: true
node.data: true
node.ingest: false
cluster.remote.connect: false
node.max_local_storage_nodes: 1
discovery.seed_hosts: ["psclxd00546","psclxd00547"]
cluster.initial_master_nodes: ["psclxd00546", "psclxd00547"]

Could anyone please point out what went wrong here?

Many thanks
Jun

I managed to get it working... well, to some extent.

First, I changed cluster.initial_master_nodes on psclxd00547 to:

cluster.initial_master_nodes: ["psclxd00547"]

After that, ES on psclxd00547 started up fine. I then added psclxd00546 back, as below:

cluster.initial_master_nodes: ["psclxd00546", "psclxd00547"]

Restarted psclxd00547; still OK. I think once the cluster has formed, initial_master_nodes is ignored... maybe.

I then started up ES on psclxd00546 with pretty much the same config. Now I have one master plus one other node.
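For anyone following along, here's the quick sanity check I used to confirm both nodes joined and see which one is master (just a sketch, assuming the default HTTP port 9200):

curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s 'http://localhost:9200/_cat/nodes?v'

_cluster/health should report number_of_nodes: 2, and in the _cat/nodes output the master column marks the elected master with an asterisk.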

Questions:

  • What should I do if I want both as master nodes?
  • What is the recommended setup if I only have two servers: one master or two masters?

Yes, that's right. From the docs:

After the cluster has formed, this setting is no longer required and is ignored.

Since you've already bootstrapped the cluster, you shouldn't be setting cluster.initial_master_nodes on the second node. You certainly shouldn't be setting it differently on the two nodes. From the same docs:

WARNING: You must set cluster.initial_master_nodes to the same list of nodes on each node on which it is set in order to be sure that only a single cluster forms during bootstrapping and therefore to avoid the risk of data loss.

Ensure that node.master is not set to false on either node. It defaults to true. One of them will be elected as the master of the cluster.
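If you are bootstrapping a fresh cluster, the safe pattern is the identical list on both nodes. A minimal sketch of the relevant lines, using the hostnames from this thread (the rest of your existing settings stay as they are):

# elasticsearch.yml on psclxd00546
node.name: psclxd00546
node.master: true
cluster.initial_master_nodes: ["psclxd00546", "psclxd00547"]

# elasticsearch.yml on psclxd00547
node.name: psclxd00547
node.master: true
cluster.initial_master_nodes: ["psclxd00546", "psclxd00547"]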

As for one master versus two masters with only two servers: there's not much difference between them.
