Nodes unable to join cluster

I have the following node configuration for a 3-node cluster:
Each node has two network cards: one with a private IP for inter-node communication and one with a public IP for Kibana / external users to connect to.

The private IPs are not reachable from outside these three nodes.

Node-1 :
cluster.name: cad-elastisearch
node.name: node-1-gnbsx00043
path.data: /data/esdb/data
path.logs: /data/esdb/log
network.host: 10.129.212.43 <= public IP
http.host: 10.129.212.43
transport.host: 10.129.213.43 <= private IP
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.129.212.43","10.129.212.44","10.129.212.45"] <= Public IP's
discovery.zen.minimum_master_nodes: 2

Node-2
cluster.name: cad-elastisearch
node.name: node-2-gnbsx00044
path.data: /data/esdb/data
path.logs: /data/esdb/log
network.host: 10.129.212.44
http.host: 10.129.212.44
transport.host: 10.129.213.44
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.129.212.43","10.129.212.44","10.129.212.45"]
discovery.zen.minimum_master_nodes: 2

Node-3
cluster.name: cad-elastisearch
node.name: node-2-gnbsx00044
path.data: /data/esdb/data
path.logs: /data/esdb/log
network.host: 10.129.212.44
http.host: 10.129.212.44
transport.host: 10.129.213.44
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.129.212.43","10.129.212.44","10.129.212.45"]
discovery.zen.minimum_master_nodes: 2
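(Aside: discovery.zen.minimum_master_nodes: 2 matches the usual majority-quorum rule for three master-eligible nodes. A quick sketch of that formula, with the function name my own:)

```python
def zen_quorum(master_eligible_nodes: int) -> int:
    """Majority quorum commonly used for discovery.zen.minimum_master_nodes."""
    return master_eligible_nodes // 2 + 1

# For this 3-node cluster:
print(zen_quorum(3))  # → 2
```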

With this configuration, the error log shows:

[2019-07-30T06:58:56,639][INFO ][o.e.p.PluginsService ] [node-1-gnbsx00043] loaded module [x-pack-monitoring]
[2019-07-30T06:58:56,640][INFO ][o.e.p.PluginsService ] [node-1-gnbsx00043] loaded module [x-pack-rollup]
[2019-07-30T06:58:56,640][INFO ][o.e.p.PluginsService ] [node-1-gnbsx00043] loaded module [x-pack-security]
[2019-07-30T06:58:56,640][INFO ][o.e.p.PluginsService ] [node-1-gnbsx00043] loaded module [x-pack-sql]
[2019-07-30T06:58:56,640][INFO ][o.e.p.PluginsService ] [node-1-gnbsx00043] loaded module [x-pack-upgrade]
[2019-07-30T06:58:56,640][INFO ][o.e.p.PluginsService ] [node-1-gnbsx00043] loaded module [x-pack-watcher]
[2019-07-30T06:58:56,640][INFO ][o.e.p.PluginsService ] [node-1-gnbsx00043] no plugins loaded
[2019-07-30T06:59:00,292][INFO ][o.e.x.s.a.s.FileRolesStore] [node-1-gnbsx00043] parsed [0] roles from file [/data/essw/elasticsearch-6.6.0/config/roles.yml]
[2019-07-30T06:59:00,796][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-1-gnbsx00043] [controller/237437] [Main.cc@109] controller (64 bit): Version 6.6.0 (Build bbb4919f4d17a5) Copyright (c) 2019 Elasticsearch BV
[2019-07-30T06:59:01,187][DEBUG][o.e.a.ActionModule ] [node-1-gnbsx00043] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-07-30T06:59:01,400][INFO ][o.e.d.DiscoveryModule ] [node-1-gnbsx00043] using discovery type [zen] and host providers [settings]
[2019-07-30T06:59:02,099][INFO ][o.e.n.Node ] [node-1-gnbsx00043] initialized
[2019-07-30T06:59:02,099][INFO ][o.e.n.Node ] [node-1-gnbsx00043] starting ...
[2019-07-30T06:59:02,237][INFO ][o.e.t.TransportService ] [node-1-gnbsx00043] publish_address {10.129.213.43:9300}, bound_addresses {10.129.213.43:9300}
[2019-07-30T06:59:02,255][INFO ][o.e.b.BootstrapChecks ] [node-1-gnbsx00043] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-07-30T06:59:05,308][WARN ][o.e.d.z.ZenDiscovery ] [node-1-gnbsx00043] not enough master nodes discovered during pinging (found [[Candidate{node={node-1-gnbsx00043}{78KaKTlkSJefU6tt3ZRPkg}{7m9ar3gEQ1OIIq1u-pFOXQ}{10.129.213.43}{10.129.213.43:9300}{ml.machine_memory=270841049088, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2019-07-30T06:59:08,309][WARN ][o.e.d.z.ZenDiscovery ] [node-1-gnbsx00043] not enough master nodes discovered during pinging (found [[Candidate{node={node-1-gnbsx00043}{78KaKTlkSJefU6tt3ZRPkg}{7m9ar3gEQ1OIIq1u-pFOXQ}{10.129.213.43}{10.129.213.43:9300}{ml.machine_memory=270841049088, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2019-07-30T06:59:11,310][WARN ][o.e.d.z.ZenDiscovery ] [node-1-gnbsx00043] not enough master nodes discovered during pinging (found [[Candidate{node={node-1-gnbsx00043}{78KaKTlkSJefU6tt3ZRPkg}{7m9ar3gEQ1OIIq1u-pFOXQ}{10.129.213.43}{10.129.213.43:9300}{ml.machine_memory=270841049088, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2019-07-30T06:59:14,311][WARN ][o.e.d.z.ZenDiscovery ] [node-1-gnbsx00043] not enough master nodes discovered during pinging (found [[Candidate{node={node-1-gnbsx00043}{78KaKTlkSJefU6tt3ZRPkg}{7m9ar3gEQ1OIIq1u-pFOXQ}{10.129.213.43}{10.129.213.43:9300}{ml.machine_memory=270841049088, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again

It seems you've given Node-2 and Node-3 the same name and configuration.

You will have to give Node-3 its own node.name and host information to separate it from Node-2 and allow both to join the cluster.

Sorry, that was a typo when pasting into the discussion forum.

In the node configuration of node-3, it is:

cluster.name: cad-elastisearch
node.name: node-1-gnbsx00044
path.data: /data/esdb/data
path.logs: /data/esdb/log
network.host: 10.129.212.45
http.host: 10.129.212.45
transport.host: 10.129.213.45
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.129.212.43","10.129.212.44","10.129.212.45"]
discovery.zen.minimum_master_nodes: 2

Again a typo. Please disregard the previous.

In the node configuration of node-3, it is

cluster.name: cad-elastisearch
node.name: node-1-gnbsx00045
path.data: /data/esdb/data
path.logs: /data/esdb/log
network.host: 10.129.212.45
http.host: 10.129.212.45
transport.host: 10.129.213.45
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.129.212.43","10.129.212.44","10.129.212.45"]
discovery.zen.minimum_master_nodes: 2

As far as I can tell, the unicast hosts should use the private IP addresses.

Initially I tried using the private IP list in the unicast hosts, but the error was the same; the nodes were unable to recognize each other.

Here is the output of cluster health:

{"cluster_name":"lsf-elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":182,"active_shards":182,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":181,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":50.13774104683195}

Pardon my keyboard problem; the last cluster health output was pasted wrongly.
This is the current state:
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}

I normally don't specify the transport.host in my elasticsearch.yml files, so I missed that part in your settings.

When you specify transport.host, you must use those addresses, not the http.host ones, in your discovery.zen.ping.unicast.hosts configuration, because Elasticsearch uses the transport (TCP) layer for master election and inter-node discovery, and HTTP only for client communication (searches etc.).
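To make the port distinction concrete: an entry in discovery.zen.ping.unicast.hosts without an explicit port resolves to the default transport port 9300, never to the HTTP port 9200. A small illustrative helper (the function name is mine, IPv4 addresses only for simplicity):

```python
def transport_endpoint(entry, default_port=9300):
    """Resolve a unicast-hosts entry to an (address, transport port) pair.

    Entries without an explicit port fall back to the default transport
    port 9300 -- not to the HTTP port 9200.
    """
    if ":" in entry:
        addr, port = entry.rsplit(":", 1)
        return addr, int(port)
    return entry, default_port

print(transport_endpoint("10.129.213.43"))       # → ('10.129.213.43', 9300)
print(transport_endpoint("10.129.213.44:9301"))  # → ('10.129.213.44', 9301)
```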

To solve the issue, you should either remove the transport.host settings, so that the transport layer binds to network.host (the same address as http.host in your case), or configure Zen discovery something like this:

discovery.zen.ping.unicast.hosts: ["10.129.213.43","10.129.213.44","10.129.213.45"]

where I've replaced the HTTP hosts 10.129.212.4x with the transport hosts 10.129.213.4x. Either approach should work.
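For completeness, node-1's settings would then look something like this (these are your existing values with only the unicast list changed; node-2 and node-3 are analogous):

```yaml
# node-1 (elasticsearch.yml) -- only the unicast hosts list changes
network.host: 10.129.212.43        # public IP, HTTP clients
http.host: 10.129.212.43
transport.host: 10.129.213.43      # private IP, inter-node traffic
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.129.213.43","10.129.213.44","10.129.213.45"]   # private IPs
discovery.zen.minimum_master_nodes: 2
```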

Good luck!