Ok, the discovery addresses in the second node are still using port 9200, and the node name is a different format, so I think you're not using the right config file (or at least you're not sharing the right config file with us).
Also the cluster UUID doesn't match, indicating that this node belonged to a different cluster (possibly on its own) earlier in its life. You need to shut this node down, fix the config, wipe its data, and then it should join the cluster.
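For reference, a minimal sketch of what the second node's `elasticsearch.yml` might look like once fixed (the cluster name, node names, and addresses here are made-up assumptions; the key point is that `discovery.seed_hosts` must use the transport port, 9300 by default, not the HTTP port 9200):

```yaml
# elasticsearch.yml — hypothetical example for the second node
cluster.name: my-cluster            # must match on every node in the cluster
node.name: node-2
network.host: 0.0.0.0

# Seed hosts use the TRANSPORT port (9300 by default), not HTTP (9200)
discovery.seed_hosts: ["10.0.0.1:9300", "10.0.0.2:9300", "10.0.0.3:9300"]

# Only needed when bootstrapping a brand-new cluster for the first time
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
```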
Completely agree. Stop everything and delete every directory that holds your Elasticsearch data, then start fresh. Start the daemon on all three nodes with the correct config files and they should form a cluster.
Okay, after some struggles I was able to get the cluster working, and another node joined up with it.
xpack.security.enabled: false
After deleting this line from my elasticsearch.yml file, everything just started working, and now I can run the cluster with a master and a non-master node. What I'm struggling with now is getting data from one point to another. An index is coming through to Kibana, but it doesn't contain any data. Right now I have a SQL server with Heartbeat set up and running on it. I know data is being sent, because Logstash is receiving logs from Heartbeat; I just don't know why the visualization isn't working.
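One quick way to narrow this down (the index pattern below is a guess — adjust it to whatever your Logstash output is actually writing to) is to ask Elasticsearch directly whether documents are landing in the index, and whether their timestamps fall inside the time range Kibana is displaying:

```shell
# Count documents in the heartbeat index (index name is an assumption)
curl -s 'http://localhost:9200/heartbeat-*/_count?pretty'

# Fetch the newest document and check its @timestamp —
# if it falls outside Kibana's selected time range, nothing will render
curl -s 'http://localhost:9200/heartbeat-*/_search?size=1&sort=@timestamp:desc&pretty'
```

If the count is zero, the problem is upstream in the Logstash output; if documents exist but Kibana shows nothing, check the index pattern and the time picker in Kibana.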
I've no idea, sorry, this thread has wandered a long way from its original problem, which sounds like it's resolved. I would recommend starting a new thread and describing things again because it's almost impossible to work out what information here is still pertinent and what you have changed.