Upgrade from 6.4.3 to 7.1 errors in GCE discovery

This worked fine before the upgrade; now I'm getting this error:

[2019-05-24T15:21:02,699][DEBUG][o.e.a.a.c.s.TransportClusterStateAction] [rrc101-196516-elasticsearch-70test1] no known master node, scheduling a retry

cluster.name: rrc101-196516-70tst
node.name: rrc101-196516-elasticsearch-70test1
network.host: gce
node.master: true
node.data: true
node.ingest: true
path.data: /mnt1/data
path.logs: /mnt1/logs
http.port: 9200
http.max_content_length: 2147483647B
indices.memory.index_buffer_size: 30%
bootstrap.memory_lock: true
action.destructive_requires_name: true
xpack.monitoring.collection.enabled: true
cloud:
  gce:
    project_id: rrc101-196516
    zone: ["us-east1-b", "us-east1-c", "us-east1-d"]
discovery:
  zen.hosts_provider: gce

Are there any other log messages? You should be seeing a warning message every 10 seconds or so too that's a bit more informative than the DEBUG message you quoted.

I think you have skipped this vital step in the full-cluster restart upgrade instructions:

If upgrading from a 6.x cluster, you must configure cluster bootstrapping by setting the cluster.initial_master_nodes setting.
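
As a sketch, reusing the project, zones, and node names from your own config, the bootstrap settings could look something like this (in 7.x the provider setting is discovery.seed_providers; the 6.x name discovery.zen.hosts_provider still works but is deprecated):

    # GCE discovery, as before
    discovery.seed_providers: gce
    cloud:
      gce:
        project_id: rrc101-196516
        zone: ["us-east1-b", "us-east1-c", "us-east1-d"]
    # One-off bootstrap setting for the first election after upgrading from 6.x:
    # list the node.name of every initial master-eligible node.
    cluster.initial_master_nodes:
      - rrc101-196516-elasticsearch-70test1
      - rrc101-196516-elasticsearch-70test2
      - rrc101-196516-elasticsearch-70test3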

I reverted to non-GCE discovery for now, using the config below.

But... our goal was to stay 100% GCE "dynamic" discovery. Is that possible on the upgrade? We do not want to hard-code IPs or node names in the YML. Our old install handled that fine, and we are scripting our installs and upgrades.

Thanks for your quick feedback :)

cluster.name: rrc101-196516-70tst
node.name: rrc101-196516-elasticsearch-70test1
network.host: rrc101-196516-elasticsearch-70test1
node.master: true
node.data: true
node.ingest: true
path.data: /mnt1/data
path.logs: /mnt1/logs
http.port: 9200
http.max_content_length: 2147483647B
indices.memory.index_buffer_size: 30%
bootstrap.memory_lock: true
action.destructive_requires_name: true
xpack.monitoring.collection.enabled: true
discovery.seed_hosts:
  - rrc101-196516-elasticsearch-70test1
  - rrc101-196516-elasticsearch-70test2
  - rrc101-196516-elasticsearch-70test3
cluster.initial_master_nodes:
  - rrc101-196516-elasticsearch-70test1
  - rrc101-196516-elasticsearch-70test2
  - rrc101-196516-elasticsearch-70test3

You've got to help Elasticsearch out with the first election after the upgrade, because there's unfortunately no safe way to do this fully dynamically. Once the new cluster has formed you can drop the cluster.initial_master_nodes setting again.
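
As a sketch of the steady state, assuming the same project and zones as your original config: once the 7.x cluster has formed, you can return to fully dynamic discovery with no node names in the YML.

    # Fully dynamic GCE discovery again; cluster.initial_master_nodes is
    # removed once the 7.x cluster has formed and elected its first master.
    discovery.seed_providers: gce
    cloud:
      gce:
        project_id: rrc101-196516
        zone: ["us-east1-b", "us-east1-c", "us-east1-d"]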

Worked fine. We will add that to our process. Thanks!

