Hi,
I have gone through a lot of threads around this topic and it seems like the answer is No, but just wanted to confirm.
Here are the details:
Can't use multicast
Most common scenario will have only 2 nodes (hence both will be master-eligible). Some installations may have more than 2, but let's ignore that for now.
I don't have the list of IPs up front. Either node may die and restart with a new IP, and I want to avoid having to restart the OTHER node just to update that IP.
For example, I did this:
Started Node1 with unicast.hosts = Node1
Started Node2 with unicast.hosts = Node1 (I could do this since I knew Node1's address by then).
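For reference, the setup above would look roughly like the following elasticsearch.yml fragments. The node names, cluster name, and the zen-discovery setting names are assumptions based on the pre-7.x config format, not copied from my actual files:

```yaml
# Node1's elasticsearch.yml -- only knows about itself at startup
cluster.name: my-cluster
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1"]
```

```yaml
# Node2's elasticsearch.yml -- Node1 is already known by the time Node2 starts
cluster.name: my-cluster
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1"]
```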
Now if Node1 gets killed and restarted (coming back with a new IP), I see that it never rejoins the cluster, even though Node1 and Node2 have the same cluster name.
The only way I can see this working is that when Node2 is added, I go and change Node1's unicast.hosts to [Node1, Node2]. But that means Elasticsearch gets restarted on Node1 unnecessarily. Am I right?
I also read that the cluster settings update API ignores unicast.hosts, since it is not a dynamically updatable setting.
Linux daemons usually support SIGHUP to re-read their configuration file without restarting the process. Does Elasticsearch support this? If not, is there ANY way to avoid having to restart Node1 just to update unicast.hosts?
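As a side note (this is an assumption about other Elasticsearch versions, not something confirmed in this thread): newer releases ship a file-based seed hosts provider whose host file is re-read while the node is running, and older releases had similar behavior via the discovery-file plugin. A sketch of the newer form:

```yaml
# elasticsearch.yml -- assumes a recent (7.x+) Elasticsearch;
# older versions needed the separate discovery-file plugin instead.
discovery.seed_providers: file

# config/unicast_hosts.txt is then re-read automatically, so it can be
# edited without restarting the node. Example contents:
#   10.0.0.1:9300
#   10.0.0.2:9300
```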
Hopefully I explained my question clearly.
Thank you.
I assume that you meant "the master node IPs and ports"; and if you meant maintaining that list on every node, then yes, this is reasonable, fairly standard, and the recommended way to do it.
Correct, this is what I ended up doing. Whenever Node1 has to restart (because it crashed, or for any other reason), I update its unicast.hosts before starting Elasticsearch on it again. Thanks.
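That "update hosts, then start" step can be sketched as a small shell wrapper. This is only an illustration under assumptions: the surviving node's address (NODE2_ADDR) is resolved by some external mechanism of your own (DNS, a registry), and the file path and addresses are made up, not taken from this thread:

```shell
# Rewrite the unicast host list before starting Elasticsearch on Node1.
ES_CONF=elasticsearch.yml              # illustrative path
NODE2_ADDR="10.0.0.2:9300"             # assumed: resolved before every start

# Regenerate the config with the other node's current address.
cat > "$ES_CONF" <<EOF
cluster.name: my-cluster
discovery.zen.ping.unicast.hosts: ["${NODE2_ADDR}"]
EOF

# Then start the service, e.g.: sudo service elasticsearch start
```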