Elasticsearch configuration

Hi,
I am trying to configure Elasticsearch 7.0.0 with 2 master nodes and 1 data node (I will add more data nodes later). Below are the configuration details for each node. But when I ran the API (curl -X GET "rw-elkm2-vm:9200/_cat/nodes?v=true&pretty") to get info on all the nodes, I got only a single node in the output:

ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1            0          60   0    0.16    0.15     0.13 m         *      rw-elkm2-vm

Configuration details:

master hosts: rw-elkm1-vm, rw-elkm2-vm
data node: rw-elkd1-vm

Here is the config file for master1:

cluster.name: elk-cluster
node.name: rw-elkm1-vm
node.attr.zone: 1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 127.0.0.1
http.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["rw-elkm1-vm", "rw-elkm2-vm"]
cluster.initial_master_nodes: ["rw-elkm1-vm","rw-elkm2-vm"]
node.master: true
node.data: false
node.ingest: false

=============================================================

Master2:

cluster.name: elk-cluster
node.name: rw-elkm2-vm
node.attr.zone: 2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 127.0.0.1
http.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["rw-elkm1-vm", "rw-elkm2-vm"]
cluster.initial_master_nodes: ["rw-elkm1-vm","rw-elkm2-vm"]
node.master: true
node.data: false
node.ingest: false

==============================================================
Data1:

cluster.name: elk-cluster
node.name: rw-elkd1-vm
node.attr.zone: 1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 127.0.0.1
http.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["rw-elkm1-vm", "rw-elkm2-vm"]
cluster.initial_master_nodes: ["rw-elkm1-vm", "rw-elkm2-vm"]
node.master: false
node.data: true
node.ingest: true

Can someone please help me?

I worked on ES version 6 in the past and it showed details for all the nodes there, but I don't know why they are missing here.

Are they reachable from each other?

Yes. I even removed the "http.host" line and added the actual IP addresses of the hosts in "network.host", but no luck.

From master 1 to master 2:

[root@rw-elkm1-vm nipandey]# ping rw-elkm2-vm

2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.268/0.313/0.359/0.048 ms

From master 2 to master 1:

[root@rw-elkm2-vm nipandey]# ping rw-elkm1-vm

2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.262/0.282/0.302/0.020 ms
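Ping only confirms basic ICMP reachability; node discovery and cluster traffic go over the transport port, 9300 by default. A more direct check, as a rough sketch assuming netcat and curl are available on the hosts, would be:

nc -vz rw-elkm2-vm 9300    # transport port used for discovery
curl -s rw-elkm2-vm:9200   # HTTP port

If the transport port is not reachable between the hosts, the nodes will never see each other even though ping works.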

This has to be set to a reachable IP address for the node to be reachable by other hosts on the network.

# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
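In other words, each node's network.host should be its own routable address (or a special value such as _site_), roughly:

network.host: 10.102.85.117    # this node's own address, as an example

rather than 127.0.0.1; otherwise the transport layer only binds to loopback and the other nodes cannot connect to it.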

Yes, I saw it later. I changed it to the actual IP addresses, but no luck. Here are the changes.
All three hosts are reachable from each other.

master1:

cluster.name: elk-cluster
node.name: rw-elkm1-vm
node.attr.zone: 1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.102.85.117
http.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["rw-elkm1-vm", "rw-elkm2-vm"]
cluster.initial_master_nodes: ["rw-elkm1-vm","rw-elkm2-vm"]
node.master: true
node.data: false
node.ingest: false

master2:

cluster.name: elk-cluster
node.name: rw-elkm2-vm
node.attr.zone: 2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.102.86.119
http.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["rw-elkm1-vm", "rw-elkm2-vm"]
cluster.initial_master_nodes: ["rw-elkm1-vm","rw-elkm2-vm"]
node.master: true
node.data: false
node.ingest: false

data 1:


cluster.name: elk-cluster
node.name: rw-elkd1-vm
node.attr.zone: 1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.102.85.245
http.port: 9200
discovery.seed_hosts: ["rw-elkm1-vm", "rw-elkm2-vm"]
cluster.initial_master_nodes: ["rw-elkm1-vm", "rw-elkm2-vm"]
node.master: false
node.data: true
node.ingest: true

What if you comment these out?

I tried, but I'm getting the following error:

the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
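That error is expected on 7.x once a node publishes a non-loopback address: at least one of those three settings must be configured, so commenting them out entirely is not an option. A minimal sketch that satisfies the check for this cluster would be something like:

discovery.seed_hosts: ["rw-elkm1-vm", "rw-elkm2-vm"]
cluster.initial_master_nodes: ["rw-elkm1-vm", "rw-elkm2-vm"]    # node.name values; only consulted the very first time the cluster forms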

And if you use IP addresses instead of host names? Those don't seem to be FQDNs.

Nope, still showing only a single node:

curl -X GET "rw-elkm2-vm:9200/_cat/nodes?v=true&pretty"

ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.102.86.119            1          60   4    0.11    0.23     0.22 m         *      rw-elkm2-vm

These are IP addresses?

Yes, I converted these to IP addresses.

cluster.name: elk-cluster
node.name: rw-elkm1-vm
node.attr.zone: 1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.102.85.117
http.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.102.85.117", "10.102.86.119"]
cluster.initial_master_nodes: ["10.102.85.117", "10.102.86.119"]
node.master: true
node.data: false
node.ingest: false

Keeping this consistent for now to see?

Are you able to log into Kibana and open Stack Management to check the shard allocations?

Also, what are the results from all three nodes?
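If Kibana is not available, the same information can be pulled straight from the REST API; a sketch using the addresses from the configs above:

curl -s "10.102.85.117:9200/_cluster/health?pretty"
curl -s "10.102.86.119:9200/_cluster/health?pretty"
curl -s "10.102.85.245:9200/_cat/nodes?v"

Comparing number_of_nodes and the node list across the three answers shows whether they have actually formed a single cluster.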

Kibana page says:

Kibana server is not ready yet

While the Kibana server's /var/log/messages says:

Oct 12 06:12:01 orw-kibana-vm kibana: {"type":"log","@timestamp":"2021-10-12T13:12:01Z","tags":["warning","stats-collection"],"pid":12777,"message":"Unable to fetch data from visualization_types collector"}
Oct 12 06:12:03 orw-kibana-vm kibana: {"type":"log","@timestamp":"2021-10-12T13:12:03Z","tags":["error","task_manager"],"pid":12777,"message":"Failed to poll for work: [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana_task_manager/_search\",\"query\":{\"ignore_unavailable\":true},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"type\\\":\\\"task\\\"}},{\\\"bool\\\":{\\\"must\\\":[{\\\"terms\\\":{\\\"task.taskType\\\":[\\\"maps_telemetry\\\",\\\"vis_telemetry\\\"]}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lte\\\":3}}},{\\\"range\\\":{\\\"task.runAt\\\":{\\\"lte\\\":\\\"now\\\"}}},{\\\"range\\\":{\\\"kibana.apiVersion\\\":{\\\"lte\\\":1}}}]}}]}},\\\"size\\\":10,\\\"sort\\\":{\\\"task.runAt\\\":{\\\"order\\\":\\\"asc\\\"}},\\\"seq_no_primary_term\\\":true}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}"}

Kibana configuration file:

server.port: 5601

server.host: "orw-kibana-vm"

server.name: "orw-kibana-vm"

elasticsearch.hosts: ["http://10.102.86.119:9200"]
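For what it's worth, elasticsearch.hosts accepts a list, so Kibana does not have to depend on a single node, e.g.:

elasticsearch.hosts: ["http://10.102.85.117:9200", "http://10.102.86.119:9200"]

That is unrelated to the error above, though; the 503 "all shards failed" on .kibana_task_manager suggests the index's shards are not available on the cluster side.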

Your node configuration seems OK. If they are not connecting, you need to check the logs of both nodes; without the logs it is not possible to know what is happening.

Restart the service on both nodes and share the logs.
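On a systemd-based package install that would be roughly:

sudo systemctl restart elasticsearch
sudo tail -n 200 /var/log/elasticsearch/elk-cluster.log    # log file is named after cluster.name under path.logs

run on each of the nodes.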

Master 1:


[2021-10-12T07:18:09,975][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [ingest-common]
[2021-10-12T07:18:09,975][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [ingest-geoip]
[2021-10-12T07:18:09,975][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [ingest-user-agent]
[2021-10-12T07:18:09,976][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [lang-expression]
[2021-10-12T07:18:09,976][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [lang-mustache]
[2021-10-12T07:18:09,976][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [lang-painless]
[2021-10-12T07:18:09,976][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [mapper-extras]
[2021-10-12T07:18:09,976][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [parent-join]
[2021-10-12T07:18:09,977][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [percolator]
[2021-10-12T07:18:09,977][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [rank-eval]
[2021-10-12T07:18:09,977][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [reindex]
[2021-10-12T07:18:09,977][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [repository-url]
[2021-10-12T07:18:09,977][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [transport-netty4]
[2021-10-12T07:18:09,978][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-ccr]
[2021-10-12T07:18:09,978][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-core]
[2021-10-12T07:18:09,978][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-deprecation]
[2021-10-12T07:18:09,978][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-graph]
[2021-10-12T07:18:09,979][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-ilm]
[2021-10-12T07:18:09,979][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-logstash]
[2021-10-12T07:18:09,979][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-ml]
[2021-10-12T07:18:09,979][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-monitoring]
[2021-10-12T07:18:09,979][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-rollup]
[2021-10-12T07:18:09,980][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-security]
[2021-10-12T07:18:09,980][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-sql]
[2021-10-12T07:18:09,980][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] loaded module [x-pack-watcher]
[2021-10-12T07:18:09,981][INFO ][o.e.p.PluginsService     ] [orw-elkm1-vm] no plugins loaded
[2021-10-12T07:18:14,006][INFO ][o.e.x.s.a.s.FileRolesStore] [orw-elkm1-vm] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2021-10-12T07:18:15,082][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [orw-elkm1-vm] [controller/8525] [Main.cc@109] controller (64 bit): Version 7.0.0 (Build cdaa022645f38d) Copyright (c) 2019 Elasticsearch BV
[2021-10-12T07:18:15,527][DEBUG][o.e.a.ActionModule       ] [orw-elkm1-vm] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2021-10-12T07:18:15,880][INFO ][o.e.d.DiscoveryModule    ] [orw-elkm1-vm] using discovery type [zen] and seed hosts providers [settings]
[2021-10-12T07:18:16,793][INFO ][o.e.n.Node               ] [orw-elkm1-vm] initialized
[2021-10-12T07:18:16,793][INFO ][o.e.n.Node               ] [orw-elkm1-vm] starting ...
[2021-10-12T07:18:16,939][INFO ][o.e.t.TransportService   ] [orw-elkm1-vm] publish_address {10.102.85.117:9300}, bound_addresses {10.102.85.117:9300}
[2021-10-12T07:18:16,947][INFO ][o.e.b.BootstrapChecks    ] [orw-elkm1-vm] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-10-12T07:18:17,086][INFO ][o.e.c.s.MasterService    ] [orw-elkm1-vm] elected-as-master ([1] nodes joined)[{orw-elkm1-vm}{_2E98E7iRYyuAffWMAI7hg}{GQzmK66hSuWrzpub-FUVdQ}{10.102.85.117}{10.102.85.117:9300}{ml.machine_memory=67388039168, xpack.installed=true, zone=1, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 7, version: 48, reason: master node changed {previous [], current [{orw-elkm1-vm}{_2E98E7iRYyuAffWMAI7hg}{GQzmK66hSuWrzpub-FUVdQ}{10.102.85.117}{10.102.85.117:9300}{ml.machine_memory=67388039168, xpack.installed=true, zone=1, ml.max_open_jobs=20}]}
[2021-10-12T07:18:17,426][INFO ][o.e.c.s.ClusterApplierService] [orw-elkm1-vm] master node changed {previous [], current [{orw-elkm1-vm}{_2E98E7iRYyuAffWMAI7hg}{GQzmK66hSuWrzpub-FUVdQ}{10.102.85.117}{10.102.85.117:9300}{ml.machine_memory=67388039168, xpack.installed=true, zone=1, ml.max_open_jobs=20}]}, term: 7, version: 48, reason: Publication{term=7, version=48}
[2021-10-12T07:18:17,482][INFO ][o.e.h.AbstractHttpServerTransport] [orw-elkm1-vm] publish_address {10.102.85.117:9200}, bound_addresses {[::]:9200}
[2021-10-12T07:18:17,483][INFO ][o.e.n.Node               ] [orw-elkm1-vm] started
[2021-10-12T07:18:17,669][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [orw-elkm1-vm] Failed to clear cache for realms [[]]
[2021-10-12T07:18:17,702][INFO ][o.e.l.LicenseService     ] [orw-elkm1-vm] license [23d1d6a7-f36b-4586-b935-fae561f80375] mode [basic] - valid
[2021-10-12T07:18:17,712][INFO ][o.e.g.GatewayService     ] [orw-elkm1-vm] recovered [3] indices into cluster_state
[2021-10-12T07:18:17,721][INFO ][o.e.c.s.MasterService    ] [orw-elkm1-vm] node-join[{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{ZQVwF8_NQxC-4RY8_Ubecg}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1} join existing leader], term: 7, version: 50, reason: added {{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{ZQVwF8_NQxC-4RY8_Ubecg}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1},}
[2021-10-12T07:18:17,855][INFO ][o.e.c.s.ClusterApplierService] [orw-elkm1-vm] added {{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{ZQVwF8_NQxC-4RY8_Ubecg}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1},}, term: 7, version: 50, reason: Publication{term=7, version=50}
[2021-10-12T07:18:18,257][INFO ][o.e.c.r.a.AllocationService] [orw-elkm1-vm] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana_1][0]] ...]).
[2021-10-12T07:27:23,036][INFO ][o.e.c.s.MasterService    ] [orw-elkm1-vm] node-left[{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{ZQVwF8_NQxC-4RY8_Ubecg}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1} disconnected], term: 7, version: 55, reason: removed {{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{ZQVwF8_NQxC-4RY8_Ubecg}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1},}
[2021-10-12T07:27:23,052][INFO ][o.e.c.s.ClusterApplierService] [orw-elkm1-vm] removed {{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{ZQVwF8_NQxC-4RY8_Ubecg}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1},}, term: 7, version: 55, reason: Publication{term=7, version=55}
[2021-10-12T07:27:23,057][INFO ][o.e.c.r.DelayedAllocationService] [orw-elkm1-vm] scheduling reroute for delayed shards in [59.9s] (3 delayed shards)
[2021-10-12T07:27:46,618][INFO ][o.e.c.s.MasterService    ] [orw-elkm1-vm] node-join[{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{Zfxg5W0MR6KIhgXau4fozw}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1} join existing leader], term: 7, version: 56, reason: added {{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{Zfxg5W0MR6KIhgXau4fozw}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1},}
[2021-10-12T07:27:47,145][INFO ][o.e.c.s.ClusterApplierService] [orw-elkm1-vm] added {{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{Zfxg5W0MR6KIhgXau4fozw}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1},}, term: 7, version: 56, reason: Publication{term=7, version=56}
[2021-10-12T07:27:47,488][INFO ][o.e.c.r.a.AllocationService] [orw-elkm1-vm] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0], [.kibana_task_manager][0]] ...]).

Master2:

[2021-10-12T07:23:39,515][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [lang-painless]
[2021-10-12T07:23:39,515][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [mapper-extras]
[2021-10-12T07:23:39,515][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [parent-join]
[2021-10-12T07:23:39,516][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [percolator]
[2021-10-12T07:23:39,516][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [rank-eval]
[2021-10-12T07:23:39,516][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [reindex]
[2021-10-12T07:23:39,516][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [repository-url]
[2021-10-12T07:23:39,516][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [transport-netty4]
[2021-10-12T07:23:39,517][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-ccr]
[2021-10-12T07:23:39,517][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-core]
[2021-10-12T07:23:39,517][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-deprecation]
[2021-10-12T07:23:39,517][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-graph]
[2021-10-12T07:23:39,517][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-ilm]
[2021-10-12T07:23:39,518][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-logstash]
[2021-10-12T07:23:39,518][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-ml]
[2021-10-12T07:23:39,518][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-monitoring]
[2021-10-12T07:23:39,518][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-rollup]
[2021-10-12T07:23:39,519][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-security]
[2021-10-12T07:23:39,519][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-sql]
[2021-10-12T07:23:39,519][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] loaded module [x-pack-watcher]
[2021-10-12T07:23:39,520][INFO ][o.e.p.PluginsService     ] [orw-elkm2-vm] no plugins loaded
[2021-10-12T07:23:43,404][INFO ][o.e.x.s.a.s.FileRolesStore] [orw-elkm2-vm] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2021-10-12T07:23:44,379][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [orw-elkm2-vm] [controller/5708] [Main.cc@109] controller (64 bit): Version 7.0.0 (Build cdaa022645f38d) Copyright (c) 2019 Elasticsearch BV
[2021-10-12T07:23:44,812][DEBUG][o.e.a.ActionModule       ] [orw-elkm2-vm] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2021-10-12T07:23:45,174][INFO ][o.e.d.DiscoveryModule    ] [orw-elkm2-vm] using discovery type [zen] and seed hosts providers [settings]
[2021-10-12T07:23:46,134][INFO ][o.e.n.Node               ] [orw-elkm2-vm] initialized
[2021-10-12T07:23:46,134][INFO ][o.e.n.Node               ] [orw-elkm2-vm] starting ...
[2021-10-12T07:23:46,284][INFO ][o.e.t.TransportService   ] [orw-elkm2-vm] publish_address {10.102.86.119:9300}, bound_addresses {10.102.86.119:9300}
[2021-10-12T07:23:46,293][INFO ][o.e.b.BootstrapChecks    ] [orw-elkm2-vm] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-10-12T07:23:46,435][INFO ][o.e.c.s.MasterService    ] [orw-elkm2-vm] elected-as-master ([1] nodes joined)[{orw-elkm2-vm}{rjduCbD1TJOR12vW3yvCXg}{wCbjwENwTIGlfx4ID45m_w}{10.102.86.119}{10.102.86.119:9300}{ml.machine_memory=67388039168, xpack.installed=true, zone=2, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 7, version: 101, reason: master node changed {previous [], current [{orw-elkm2-vm}{rjduCbD1TJOR12vW3yvCXg}{wCbjwENwTIGlfx4ID45m_w}{10.102.86.119}{10.102.86.119:9300}{ml.machine_memory=67388039168, xpack.installed=true, zone=2, ml.max_open_jobs=20}]}
[2021-10-12T07:23:46,734][INFO ][o.e.c.s.ClusterApplierService] [orw-elkm2-vm] master node changed {previous [], current [{orw-elkm2-vm}{rjduCbD1TJOR12vW3yvCXg}{wCbjwENwTIGlfx4ID45m_w}{10.102.86.119}{10.102.86.119:9300}{ml.machine_memory=67388039168, xpack.installed=true, zone=2, ml.max_open_jobs=20}]}, term: 7, version: 101, reason: Publication{term=7, version=101}
[2021-10-12T07:23:46,773][INFO ][o.e.h.AbstractHttpServerTransport] [orw-elkm2-vm] publish_address {10.102.86.119:9200}, bound_addresses {[::]:9200}
[2021-10-12T07:23:46,774][INFO ][o.e.n.Node               ] [orw-elkm2-vm] started
[2021-10-12T07:23:46,970][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [orw-elkm2-vm] Failed to clear cache for realms [[]]
[2021-10-12T07:23:47,001][INFO ][o.e.l.LicenseService     ] [orw-elkm2-vm] license [2bdeff11-be48-421d-9ef1-791b8947442f] mode [basic] - valid
[2021-10-12T07:23:47,012][INFO ][o.e.g.GatewayService     ] [orw-elkm2-vm] recovered [2] indices into cluster_state
[2021-10-12T07:27:46,592][WARN ][o.e.c.c.Coordinator      ] [orw-elkm2-vm] failed to validate incoming join request from node [{orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{Zfxg5W0MR6KIhgXau4fozw}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1}]
org.elasticsearch.transport.RemoteTransportException: [orw-elkd1-vm][10.102.85.245:9300][internal:cluster/coordination/join/validate]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: join validation on cluster state with a different cluster uuid Bp1EZ69WSFq8IR4DFe8nUw than local cluster uuid WwcRVF99Qmu4J2azujyM9g, rejecting
        at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$4(JoinHelper.java:147) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:251) ~[?:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:309) ~[?:?]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:63) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1077) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:835) [?:?]

Data 1:

 at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1118) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TcpTransport.lambda$handleException$24(TcpTransport.java:1001) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) ~[elasticsearch-7.0.0.jar:7.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
        at java.lang.Thread.run(Thread.java:835) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [orw-elkd1-vm][10.102.85.245:9300][internal:cluster/coordination/join/validate]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: join validation on cluster state with a different cluster uuid Bp1EZ69WSFq8IR4DFe8nUw than local cluster uuid WwcRVF99Qmu4J2azujyM9g, rejecting
        at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$4(JoinHelper.java:147) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:251) ~[?:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:309) ~[?:?]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:63) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1077) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
        at java.lang.Thread.run(Thread.java:835) ~[?:?]
[2021-10-12T07:27:46,661][INFO ][o.e.c.s.ClusterApplierService] [orw-elkd1-vm] master node changed {previous [], current [{orw-elkm1-vm}{_2E98E7iRYyuAffWMAI7hg}{GQzmK66hSuWrzpub-FUVdQ}{10.102.85.117}{10.102.85.117:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1}]}, added {{orw-elkm1-vm}{_2E98E7iRYyuAffWMAI7hg}{GQzmK66hSuWrzpub-FUVdQ}{10.102.85.117}{10.102.85.117:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1},}, term: 7, version: 56, reason: ApplyCommitRequest{term=7, version=56, sourceNode={orw-elkm1-vm}{_2E98E7iRYyuAffWMAI7hg}{GQzmK66hSuWrzpub-FUVdQ}{10.102.85.117}{10.102.85.117:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=1}}
[2021-10-12T07:27:46,672][INFO ][o.e.c.c.JoinHelper       ] [orw-elkd1-vm] failed to join {orw-elkm2-vm}{rjduCbD1TJOR12vW3yvCXg}{wCbjwENwTIGlfx4ID45m_w}{10.102.86.119}{10.102.86.119:9300}{ml.machine_memory=67388039168, ml.max_open_jobs=20, xpack.installed=true, zone=2} with JoinRequest{sourceNode={orw-elkd1-vm}{9KSQd4TET-epF3fv9TDmPg}{Zfxg5W0MR6KIhgXau4fozw}{10.102.85.245}{10.102.85.245:9300}{ml.machine_memory=67388039168, xpack.installed=true, zone=1, ml.max_open_jobs=20}, optionalJoin=Optional.empty}
org.elasticsearch.transport.RemoteTransportException: [orw-elkm2-vm][10.102.86.119:9300][internal:cluster/coordination/join]
Caused by: java.lang.IllegalStateException: failure when sending a validation request to node
        at org.elasticsearch.cluster.coordination.Coordinator$3.onFailure(Coordinator.java:500) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.cluster.coordination.JoinHelper$5.handleException(JoinHelper.java:351) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1118) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TcpTransport.lambda$handleException$24(TcpTransport.java:1001) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) ~[elasticsearch-7.0.0.jar:7.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
        at java.lang.Thread.run(Thread.java:835) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [orw-elkd1-vm][10.102.85.245:9300][internal:cluster/coordination/join/validate]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: join validation on cluster state with a different cluster uuid Bp1EZ69WSFq8IR4DFe8nUw than local cluster uuid WwcRVF99Qmu4J2azujyM9g, rejecting
        at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$4(JoinHelper.java:147) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:251) ~[?:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:309) ~[?:?]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:63) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1077) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
        at java.lang.Thread.run(Thread.java:835) ~[?:?]
[2021-10-12T07:27:46,847][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [orw-elkd1-vm] Failed to clear cache for realms [[]]
[2021-10-12T07:27:46,849][INFO ][o.e.x.s.a.TokenService   ] [orw-elkd1-vm] refresh keys
[2021-10-12T07:27:47,070][INFO ][o.e.x.s.a.TokenService   ] [orw-elkd1-vm] refreshed keys
[2021-10-12T07:27:47,098][INFO ][o.e.l.LicenseService     ] [orw-elkd1-vm] license [23d1d6a7-f36b-4586-b935-fae561f80375] mode [basic] - valid
[2021-10-12T07:27:47,131][INFO ][o.e.h.AbstractHttpServerTransport] [orw-elkd1-vm] publish_address {10.102.85.245:9200}, bound_addresses {[::]:9200}
[2021-10-12T07:27:47,132][INFO ][o.e.n.Node               ] [orw-elkd1-vm] started
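The key line in these logs is the join rejection: "join validation on cluster state with a different cluster uuid Bp1EZ69WSFq8IR4DFe8nUw than local cluster uuid WwcRVF99Qmu4J2azujyM9g". That means the two masters have each bootstrapped their own cluster, each with its own UUID, so the data node can only ever join one of them and the other master keeps rejecting its joins. A quick way to confirm, as a sketch assuming HTTP is reachable on 9200, is to compare the cluster_uuid each node reports on its root endpoint:

curl -s 10.102.85.117:9200/ | grep cluster_uuid
curl -s 10.102.86.119:9200/ | grep cluster_uuid
curl -s 10.102.85.245:9200/ | grep cluster_uuid

If the UUIDs differ between the two masters, they are not part of the same cluster even though discovery can reach them.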