Difficulty adding nodes

Thanks for looking. I have tried it with cluster.initial_master_nodes commented out, with just elastic1 listed, and with elastic2 and elastic3 added. Same results every time.

The only setting that seems to change the _cluster/health output is gateway.recover_after_nodes. Setting it to 2 lets the node start, but with "status": "red". A value of 1 (or leaving it commented out) starts with "status": "green".
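For reference, here is a sketch of the relevant part of my elasticsearch.yml as I have been testing it. The values come from the tutorials, so treat this as the shape of what I'm toggling rather than a known-good config:

```yaml
# elasticsearch.yml (sketch of the settings I keep changing; values are
# from the tutorials, not a verified-working config)
cluster.name: my-cluster
node.name: elastic1

# Tried commented out, with just one node, and with all three listed:
cluster.initial_master_nodes: ["elastic1", "elastic2", "elastic3"]

# The only setting that changes _cluster/health for me:
# 2 -> node starts "red"; 1 or commented out -> starts "green"
gateway.recover_after_nodes: 2
```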

I found someone else who was hitting the same, or a very similar, issue. In fact, I recognize his configs: he used one of the five example tutorials I saw. So I'm not the only one running into this.

Hari.v said he got his cluster to work by adding "node.master: true , node.data: true for all the 3 nodes". I modified my config to match his (again), but now I have three ES servers each running as individual (single-node) clusters.
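To be concrete, this is my reading of what Hari.v described, as a per-node sketch (ES 7.x syntax; the seed-hosts line is my assumption about what the tutorials also include, with my hostnames):

```yaml
# Sketch of the per-node settings from Hari.v's post (my interpretation,
# not a verified-working config) - repeated on all three nodes:
node.master: true
node.data: true

# Each node also needs to know about the others; I have something like
# this from the tutorials (hostnames are mine):
discovery.seed_hosts: ["elastic1", "elastic2", "elastic3"]
```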

Below is the log output from elastic2, started after elastic1. Elastic1 has the exact same log messages, but with its own IP:

[2019-06-12T13:13:47,739][INFO ][o.e.n.Node               ] [elastic2] starting ...
[2019-06-12T13:13:47,887][INFO ][o.e.t.TransportService   ] [elastic2] publish_address {10.192.10.62:9300}, bound_addresses {10.192.10.62:9300}
[2019-06-12T13:13:47,895][INFO ][o.e.b.BootstrapChecks    ] [elastic2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-06-12T13:13:47,903][INFO ][o.e.c.c.Coordinator      ] [elastic2] cluster UUID [gnWCd64oQlenU0d1eVaL6Q]
[2019-06-12T13:13:48,012][INFO ][o.e.c.s.MasterService    ] [elastic2] elected-as-master ([1] nodes joined)[{elastic2}{ML6K63J2Tvmx-WsPqeaeHA}{32880EVrSFyvCKy2evsXyA}{10.192.10.62}{10.192.10.62:9300}{ml.machine_memory=16819339264, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 45, version: 119, reason: master node changed {previous [], current [{elastic2}{ML6K63J2Tvmx-WsPqeaeHA}{32880EVrSFyvCKy2evsXyA}{10.192.10.62}{10.192.10.62:9300}{ml.machine_memory=16819339264, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-06-12T13:13:48,072][INFO ][o.e.c.s.ClusterApplierService] [elastic2] master node changed {previous [], current [{elastic2}{ML6K63J2Tvmx-WsPqeaeHA}{32880EVrSFyvCKy2evsXyA}{10.192.10.62}{10.192.10.62:9300}{ml.machine_memory=16819339264, xpack.installed=true, ml.max_open_jobs=20}]}, term: 45, version: 119, reason: Publication{term=45, version=119}
[2019-06-12T13:13:48,176][INFO ][o.e.h.AbstractHttpServerTransport] [elastic2] publish_address {10.192.10.62:9200}, bound_addresses {10.192.10.62:9200}
[2019-06-12T13:13:48,176][INFO ][o.e.n.Node               ] [elastic2] started
[2019-06-12T13:13:48,401][INFO ][o.e.l.LicenseService     ] [elastic2] license [accb73b4-6b25-43ec-a3e4-a73575d5e69c] mode [basic] - valid
[2019-06-12T13:13:48,413][INFO ][o.e.g.GatewayService     ] [elastic2] recovered [0] indices into cluster_state