Master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster

I'm trying to upgrade from 6.7.1 to 7.0.0 but am getting the error below. I changed to discovery.seed_hosts and also added cluster.initial_master_nodes pointing to the same master nodes. What am I missing to get this to come up?

curl -XGET "localhost:9200/_cat/health?v&pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}

[WARN ][o.e.c.c.ClusterFormationFailureHelper] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes


Can you share the elasticsearch.yml file from one of your master nodes?

Also the full warning message from o.e.c.c.ClusterFormationFailureHelper (everything, from the timestamp at the start onwards) would be useful to see. You've truncated all the important bits out of your OP!

I have the same issue when migrating from 6.7.0 to 7.0.0

I had the same issue.
Put the following into the elasticsearch.yml of your master node:

cluster.initial_master_nodes:
  - node_name_or_ip

Start the first node and then the rest of the nodes.
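Concretely, the steps above end up looking something like this in each master-eligible node's elasticsearch.yml (a minimal sketch; the node names es-master-1/2/3 are placeholders, not from this thread):

```yaml
# 7.x discovery: replaces discovery.zen.ping.unicast.hosts from 6.x
discovery.seed_hosts:
  - es-master-1
  - es-master-2
  - es-master-3

# Only consulted the very first time a brand-new 7.x cluster bootstraps;
# the entries must match the node.name values of the master-eligible nodes.
cluster.initial_master_nodes:
  - es-master-1
  - es-master-2
  - es-master-3
```

Note that cluster.initial_master_nodes is ignored once the cluster has formed, so it only matters for the first election.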


I am running on Kubernetes; node hostnames and IPs are not known in advance.

Sorry, then I wouldn't know how to solve this. Mine was on a regular host; I have no experience (yet) with Kubernetes.

In this situation you should use the node names, i.e. the values of node.name, not the hostnames or IP addresses.


In the case of Kubernetes, node.name is the same as the hostname, as it is the name of the pod, auto-generated by Kubernetes. It cannot be known in advance.

[2019-04-11T11:16:43,126][WARN ][o.e.d.SeedHostsResolver ] [elasticsearch-master-1-vl2l6] failed to resolve host [elasticsearch-master-site1.svc.cluster.local] elasticsearch-master-site1-rta3.svc.cluster.local
at ~[?:1.8.0_202]
at ~[?:1.8.0_202]
at ~[?:1.8.0_202]
at org.elasticsearch.transport.TcpTransport.parse( ~[elasticsearch-7.0.0.jar:7.0.0]
at org.elasticsearch.transport.TcpTransport.addressesFromString( ~[elasticsearch-7.0.0.jar:7.0.0]
at org.elasticsearch.transport.TransportService.addressesFromString( ~[elasticsearch-7.0.0.jar:7.0.0]
at org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHostsLists$0( ~[elasticsearch-7.0.0.jar:7.0.0]
at ~[?:1.8.0_202]
at org.elasticsearch.common.util.concurrent.ThreadContext$ ~[elasticsearch-7.0.0.jar:7.0.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor$ [?:1.8.0_202]
at [?:1.8.0_202]

Everything worked in version 6.7.0 with the same setup.
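The SeedHostsResolver warning above is, at bottom, a failed DNS lookup. A quick way to check what a seed host resolves to from the node's environment is a stdlib one-liner like this (a sketch, not something from this thread; swap "localhost" for the Kubernetes service name from the log):

```python
import socket

def resolve_seed_host(host: str, port: int = 9300):
    """Resolve a seed host to its addresses, roughly what SeedHostsResolver does."""
    try:
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        # Deduplicate: getaddrinfo may return one entry per address family
        return sorted({info[4][0] for info in infos})
    except socket.gaierror as err:
        return f"failed to resolve host [{host}]: {err}"

# "localhost" should always resolve; use the cluster's service name to test it
print(resolve_seed_host("localhost"))
```

If this fails for the service name inside the pod, the problem is the Kubernetes DNS name (wrong namespace or missing headless service), not Elasticsearch.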

That's the default for node.name, but you should be overriding it. See for instance how the Helm chart does it.


I am already doing that, setting the ES node.name to the name of the pod.

Ok, so can you do what the Helm chart does to set cluster.initial_master_nodes too?

I cannot do that, as the Helm chart uses template helpers to extract that value. In my case I have a simple StatefulSet defined. To do what the Helm chart does, I would have to use Helm, which means a complete revamp of the deployment.

If you are using a bare StatefulSet then AIUI the pod names are predictable.

Is this not the case in your deployment?
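For what it's worth, that predictability is exactly what makes this workable without Helm. A sketch of a pod template fragment, assuming a hypothetical StatefulSet named es-master with 3 replicas (a bare StatefulSet names its pods es-master-0, es-master-1, es-master-2; none of these names are from this thread):

```yaml
# StatefulSet pod template fragment (hypothetical names)
spec:
  containers:
    - name: elasticsearch
      image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0
      env:
        # node.name follows the pod name via the downward API
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # StatefulSet pod names are ordinal, so they are known in advance
        - name: cluster.initial_master_nodes
          value: "es-master-0,es-master-1,es-master-2"
        # headless service governing the StatefulSet, for discovery
        - name: discovery.seed_hosts
          value: "es-master-headless"
```

The official Docker image passes lowercase dotted environment variables through as Elasticsearch settings, which is essentially what the Helm chart generates under the hood.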

I cannot rely on the StatefulSet's predictable naming behavior. I will roll back to ES 6.7.0, as this seems to be a major bug in ES 7.0.0.

Can you clarify what you mean by this? Do you know of situations where it doesn't behave as documented?

Here is my elasticsearch.yml file:

cluster.name: dblogging_es_dev
node.name: ${HOSTNAME}
node.master: true
node.data: false
node.ingest: false
node.max_local_storage_nodes: 1
path.data: /data/elasticsearch
path.repo: /dbbackup/d-gp2-es46-1
path.logs: /var/log/elasticsearch site
transport.tcp.port: 9300
http.port: 9200
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping_timeout: 5s
discovery.seed_hosts: ["d-gp2-es46-1", "d-gp2-es46-2", "d-gp2-es46-3"]
cluster.initial_master_nodes: ["d-gp2-es46-1", "d-gp2-es46-2", "d-gp2-es46-3"]
bootstrap.memory_lock: true


Thanks @kyle_che, can you also share the whole o.e.c.c.ClusterFormationFailureHelper message?