Running multiple nodes on localhost, but Kibana shows number_of_nodes=1

I'm running two nodes on localhost. The nodes found each other successfully, as shown in the log of the second node below. But when I run GET /_cluster/health in Kibana, it shows only one node.
{
  "cluster_name" : "elasticsearch_cluster_1",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 5,
  "active_shards" : 5,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 4,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 55.55555555555556
}

Log of node-2:
[2020-02-21T09:35:33,816][INFO ][o.e.c.c.ClusterBootstrapService] [node-2] no discovery configuration found, will perform best-effort cluster bootstrapping after [3s] unless existing master is discovered
[2020-02-21T09:35:33,977][INFO ][o.e.c.s.MasterService ] [node-2] elected-as-master ([1] nodes joined)[{node-2}{0CkLTR0CTGW3aJc3CJmhtw}{_66G_mZqSXGLucftZuTtnA}{localhost}{127.0.0.1:9301}{dilm}{ml.machine_memory=25718398976, xpack.installed=true, ml.max_open_jobs=20} elect leader, BECOME_MASTER_TASK, FINISH_ELECTION], term: 10, version: 79, delta: master node changed {previous , current [{node-2}{0CkLTR0CTGW3aJc3CJmhtw}{_66G_mZqSXGLucftZuTtnA}{localhost}{127.0.0.1:9301}{dilm}{ml.machine_memory=25718398976, xpack.installed=true, ml.max_open_jobs=20}]}
[2020-02-21T09:35:34,090][INFO ][o.e.c.s.ClusterApplierService] [node-2] master node changed {previous , current [{node-2}{0CkLTR0CTGW3aJc3CJmhtw}{_66G_mZqSXGLucftZuTtnA}{localhost}{127.0.0.1:9301}{dilm}{ml.machine_memory=25718398976, xpack.installed=true, ml.max_open_jobs=20}]}, term: 10, version: 79, reason: Publication{term=10, version=79}
[2020-02-21T09:35:34,187][INFO ][o.e.h.AbstractHttpServerTransport] [node-2] publish_address {localhost/127.0.0.1:9201}, bound_addresses {127.0.0.1:9201}, {[::1]:9201}
[2020-02-21T09:35:34,188][INFO ][o.e.n.Node ] [node-2] started
[2020-02-21T09:35:34,369][INFO ][o.e.l.LicenseService ] [node-2] license [5c0a31a7-ec09-42f4-835f-46bfa2793254] mode [basic] - valid
[2020-02-21T09:35:34,371][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [node-2] Active license is now [BASIC]; Security is disabled
[2020-02-21T09:35:34,385][INFO ][o.e.g.GatewayService ] [node-2] recovered [3] indices into cluster_state
[2020-02-21T09:35:35,138][INFO ][o.e.c.r.a.AllocationService] [node-2] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana_1][0]]]).

This log does not indicate that this node discovered any other nodes, as far as I can see. It only mentions node-2 throughout.

It also tells us that you did not correctly configure cluster bootstrapping, so this note in the doc applies and describes the solution to your issue.
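For reference, this is roughly what that discovery/bootstrap configuration looks like in 7.x. It's a minimal sketch, not a drop-in answer: it assumes node-1 uses the default transport port 9300 (node-2's log shows it bound to 9301) and that both nodes are meant to form a single cluster on localhost.

# elasticsearch.yml for node-1 (node-2 is the same apart from node.name)
cluster.name: elasticsearch_cluster_1
node.name: node-1
network.host: localhost

# Where to look for other cluster members (transport ports, not HTTP ports)
discovery.seed_hosts: ["localhost:9300", "localhost:9301"]

# Only used the very first time the cluster forms; remove it afterwards
cluster.initial_master_nodes: ["node-1", "node-2"]

Without these settings, each node performs its own best-effort bootstrap, elects itself master, and ends up as a one-node cluster, which is exactly what the log above shows.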

The only config I've set is what's named in the documentation:

Node 1 elasticsearch.yml
cluster.name: elasticsearch_cluster_1
node.name: node-1
network.host: localhost

Node 2 elasticsearch.yml
cluster.name: elasticsearch_cluster_1
node.name: node-2
network.host: localhost

Any idea why the nodes can't find each other?

Yes, you haven't read the link I shared above, nor the docs for these important settings.
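Once those settings are in place on both nodes and they've been restarted, a quick sanity check from Kibana's Dev Tools (alongside the health call above) is the cat nodes API, which should list two entries:

GET /_cat/nodes?v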
