My cluster has 3 master nodes with discovery.zen.minimum_master_nodes set to 2, but they are not able to join:
failed to send join request to master
OS: Red Hat. The default networking is multicast, but in elasticsearch.yml I'm using unicast without disabling the multicast networking settings in Red Hat, and the following is the error:
failed to send join request to master
@warkolm Thanks for the help. I sorted it out: the installation was copied across the machines, so the data folder contained the same node data, and that caused the error. I'm creating an 11-node cluster: 3 master nodes, 2 data nodes, 2 coordinating nodes, 2 Kibanas. Would you please let me know how to define discovery.zen.ping.unicast.hosts for all servers? I think I should put all the nodes in it, like this: discovery.zen.ping.unicast.hosts: ["esmaster1:8510","xxxx:8510","master301:8530","datanode:8530","datanode:8530","kibana:8540","kibana:8540"]. Also, please help me set the password for Kibana.
I would also like some clarity on this setting:
# Block initial recovery after a full cluster restart until N nodes are started:
#gateway.recover_after_nodes: 3
I have 3 master nodes, 2 data nodes, 2 Kibanas, 2 coordinating nodes, and 2 Logstash instances. I'm a little confused about the configuration: I'm not sure what value to set, or how I would handle it when 1 Logstash goes down.
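On the recovery question: gateway.recover_after_nodes counts Elasticsearch nodes only (master, data, and coordinating nodes). Kibana and Logstash are clients of the cluster, not members of it, so a Logstash instance going down has no effect on this setting. A minimal sketch for this topology, assuming 3 master + 2 data + 2 coordinating = 7 Elasticsearch nodes (the exact values are a judgment call, not the only correct ones):

```yaml
# elasticsearch.yml (sketch; Kibana and Logstash are not counted here)
gateway.recover_after_nodes: 5    # begin recovery once most nodes are back
gateway.expected_nodes: 7         # total Elasticsearch nodes in the cluster
gateway.recover_after_time: 5m    # or wait at most 5m after recover_after_nodes is met
```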
You only need to set discovery.zen.ping.unicast.hosts to the master nodes.
Also, you are using a non-standard port when you define discovery.zen.ping.unicast.hosts, but you don't appear to be binding the transport port accordingly.
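Putting those two points together, a sketch of what that could look like in elasticsearch.yml. The hostnames esmaster2 and esmaster3 are placeholders (only esmaster1:8510 appears in the question), and the port must match whatever you actually bind:

```yaml
# Same on every node: list ONLY the master-eligible nodes
discovery.zen.ping.unicast.hosts: ["esmaster1:8510", "esmaster2:8510", "esmaster3:8510"]
discovery.zen.minimum_master_nodes: 2   # (3 masters / 2) + 1

# On the master nodes themselves, bind the transport port to match the
# port used in unicast.hosts (8510 here, instead of the default 9300):
transport.tcp.port: 8510
```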
Perhaps you can also post your logs that show Elasticsearch startup.
@warkolm From your point of view, discovery.zen.ping.unicast.hosts should list only the master nodes, not the coordinating/data nodes. How do I configure the data nodes and coordinating nodes? I'm binding the TCP transport port and HTTP port to non-default ports for security reasons. Please find my logs here; I get a transport exception in my logs. Please help with it.
Is an ingest node compulsory for X-Pack? X-Pack throws an error saying it needs ingest nodes all the time.
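On the two questions above: discovery.zen.ping.unicast.hosts is the same on every node (it always points at the masters); what differs per node are the role flags. A sketch of the per-node role settings for Elasticsearch 5.x/6.x follows; giving the data nodes node.ingest: true is one way to satisfy X-Pack features (such as monitoring) that expect at least one ingest node in the cluster:

```yaml
# Master-eligible node
node.master: true
node.data: false
node.ingest: false

# Data node
node.master: false
node.data: true
node.ingest: true    # also provides the ingest capability X-Pack asks for

# Coordinating-only node
node.master: false
node.data: false
node.ingest: false
```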