I'm running 2 Elasticsearch nodes on localhost. The nodes seem to have found each other, judging by the log of the second node below, but when I run GET /_cluster/health in Kibana it reports only 1 node.
{
"cluster_name" : "elasticsearch_cluster_1",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 5,
"active_shards" : 5,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 4,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 55.55555555555556
}
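For what it's worth, the yellow status and the odd percentage are internally consistent with a one-node cluster: the 4 unassigned shards are presumably replicas, which cannot be allocated on the same node as their primaries. A quick sanity check of the reported number (shard counts taken from the response above):

```python
# Shard counts copied from the _cluster/health response above.
active_shards = 5
unassigned_shards = 4
total = active_shards + unassigned_shards

# Reproduces active_shards_percent_as_number from the response.
print(active_shards / total * 100)  # 55.55555555555556
```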
Log of node-2:
[2020-02-21T09:35:33,816][INFO ][o.e.c.c.ClusterBootstrapService] [node-2] no discovery configuration found, will perform best-effort cluster bootstrapping after [3s] unless existing master is discovered
[2020-02-21T09:35:33,977][INFO ][o.e.c.s.MasterService ] [node-2] elected-as-master ([1] nodes joined)[{node-2}{0CkLTR0CTGW3aJc3CJmhtw}{_66G_mZqSXGLucftZuTtnA}{localhost}{127.0.0.1:9301}{dilm}{ml.machine_memory=25718398976, xpack.installed=true, ml.max_open_jobs=20} elect leader, BECOME_MASTER_TASK, FINISH_ELECTION], term: 10, version: 79, delta: master node changed {previous , current [{node-2}{0CkLTR0CTGW3aJc3CJmhtw}{_66G_mZqSXGLucftZuTtnA}{localhost}{127.0.0.1:9301}{dilm}{ml.machine_memory=25718398976, xpack.installed=true, ml.max_open_jobs=20}]}
[2020-02-21T09:35:34,090][INFO ][o.e.c.s.ClusterApplierService] [node-2] master node changed {previous , current [{node-2}{0CkLTR0CTGW3aJc3CJmhtw}{_66G_mZqSXGLucftZuTtnA}{localhost}{127.0.0.1:9301}{dilm}{ml.machine_memory=25718398976, xpack.installed=true, ml.max_open_jobs=20}]}, term: 10, version: 79, reason: Publication{term=10, version=79}
[2020-02-21T09:35:34,187][INFO ][o.e.h.AbstractHttpServerTransport] [node-2] publish_address {localhost/127.0.0.1:9201}, bound_addresses {127.0.0.1:9201}, {[::1]:9201}
[2020-02-21T09:35:34,188][INFO ][o.e.n.Node ] [node-2] started
[2020-02-21T09:35:34,369][INFO ][o.e.l.LicenseService ] [node-2] license [5c0a31a7-ec09-42f4-835f-46bfa2793254] mode [basic] - valid
[2020-02-21T09:35:34,371][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [node-2] Active license is now [BASIC]; Security is disabled
[2020-02-21T09:35:34,385][INFO ][o.e.g.GatewayService ] [node-2] recovered [3] indices into cluster_state
[2020-02-21T09:35:35,138][INFO ][o.e.c.r.a.AllocationService] [node-2] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana_1][0]]]).
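Judging by the first log line ("no discovery configuration found, will perform best-effort cluster bootstrapping"), node-2 seems to have bootstrapped its own one-node cluster instead of joining node-1. For reference, a minimal sketch of the discovery settings a two-node localhost setup would normally carry in elasticsearch.yml (node names and ports assumed from the log; I haven't confirmed this is exactly what my config is missing):

```yaml
# Hypothetical elasticsearch.yml for node-2; node-1 would be analogous,
# with http.port 9200 and transport.port 9300 (ports taken from the log).
cluster.name: elasticsearch_cluster_1
node.name: node-2
http.port: 9201
transport.port: 9301
# Where to look for the other master-eligible node:
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301"]
# Only used on the very first bootstrap of a brand-new cluster:
cluster.initial_master_nodes: ["node-1", "node-2"]
```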