Unicast_hosts.txt entries are not getting loaded

I have one node already running in the cluster. When I start the second node, my application creates unicast_hosts.txt in the config folder with the IP address and port of the first node.
The second node does not pick up the new file and still does not detect Node1.
After restarting Node2, it detects Node1 and GET /_nodes returns both nodes.

Please help me understand why the unicast_hosts.txt entries are not loaded without a restart.

Which version are you using? What is the full path of your elasticsearch.yml file and of your unicast_hosts.txt file? Can you share the logs from the node when it's starting up?

Thanks for the reply.
We are running Elasticsearch 6.6.
The paths are /usr/share/elasticsearch/config/elasticsearch.yml
and /usr/share/elasticsearch/config/unicast_hosts.txt
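For reference, the file just lists one transport address per line; ours contains something like this (the address here is illustrative, matching the Node1 address that appears in the logs below):

```text
# unicast_hosts.txt - one transport (9300) address per line
10.142.0.48:9300
```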

Here is the startup log

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-02-18T16:51:58,152][INFO ][o.e.e.NodeEnvironment    ] [uTmHnAH] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [6.1gb], net total_space [9.6gb], types [ext4]
[2019-02-18T16:51:58,156][INFO ][o.e.e.NodeEnvironment    ] [uTmHnAH] heap size [1.9gb], compressed ordinary object pointers [true]
[2019-02-18T16:51:58,159][INFO ][o.e.n.Node               ] [uTmHnAH] node name derived from node ID [uTmHnAHOSiiFn8NAo9x-HA]; set [node.name] to override
[2019-02-18T16:51:58,160][INFO ][o.e.n.Node               ] [uTmHnAH] version[6.6.0], pid[11], build[oss/tar/a9861f4/2019-01-24T11:27:09.439740Z], OS[Linux/4.15.0-1027-gcp/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-02-18T16:51:58,161][INFO ][o.e.n.Node               ] [uTmHnAH] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-9153637189921725421, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Xms2g, -Xmx2g, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar]
[2019-02-18T16:51:59,231][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [aggs-matrix-stats]
[2019-02-18T16:51:59,232][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [analysis-common]
[2019-02-18T16:51:59,232][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [ingest-common]
[2019-02-18T16:51:59,232][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [lang-expression]
[2019-02-18T16:51:59,232][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [lang-mustache]
[2019-02-18T16:51:59,233][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [lang-painless]
[2019-02-18T16:51:59,233][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [mapper-extras]
[2019-02-18T16:51:59,233][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [parent-join]
[2019-02-18T16:51:59,233][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [percolator]
[2019-02-18T16:51:59,233][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [rank-eval]
[2019-02-18T16:51:59,234][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [reindex]
[2019-02-18T16:51:59,234][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [repository-url]
[2019-02-18T16:51:59,234][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [transport-netty4]
[2019-02-18T16:51:59,234][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded module [tribe]
[2019-02-18T16:51:59,235][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded plugin [ingest-geoip]
[2019-02-18T16:51:59,235][INFO ][o.e.p.PluginsService     ] [uTmHnAH] loaded plugin [ingest-user-agent]
[2019-02-18T16:52:02,348][INFO ][o.e.d.DiscoveryModule    ] [uTmHnAH] using discovery type [zen] and host providers [settings, file]
[2019-02-18T16:52:02,907][INFO ][o.e.n.Node               ] [uTmHnAH] initialized
[2019-02-18T16:52:02,908][INFO ][o.e.n.Node               ] [uTmHnAH] starting ...
[2019-02-18T16:52:03,106][INFO ][o.e.t.TransportService   ] [uTmHnAH] publish_address {10.142.0.49:9300}, bound_addresses {0.0.0.0:9300}
[2019-02-18T16:52:03,120][INFO ][o.e.b.BootstrapChecks    ] [uTmHnAH] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-02-18T16:52:06,341][INFO ][o.e.c.s.ClusterApplierService] [uTmHnAH] detected_master {kNXDw19}{kNXDw198QlWeOGDWppgzcQ}{x75T0ipzSPihJJzjBQ7Gbg}{10.142.0.48}{10.142.0.48:9300}, added {{kNXDw19}{kNXDw198QlWeOGDWppgzcQ}{x75T0ipzSPihJJzjBQ7Gbg}{10.142.0.48}{10.142.0.48:9300},}, reason: apply cluster state (from master [master {kNXDw19}{kNXDw198QlWeOGDWppgzcQ}{x75T0ipzSPihJJzjBQ7Gbg}{10.142.0.48}{10.142.0.48:9300} committed version [3]])
[2019-02-18T16:52:06,367][INFO ][o.e.h.n.Netty4HttpServerTransport] [uTmHnAH] publish_address {192.168.128.2:9200}, bound_addresses {0.0.0.0:9200}
[2019-02-18T16:52:06,367][INFO ][o.e.n.Node               ] [uTmHnAH] started

Hmm that all looks ok to me. What is discovery.zen.minimum_master_nodes set to in the elasticsearch.yml file on each node?

Sorry, should've also asked: have you set node.master: false on either of these nodes?

We have not set discovery.zen.minimum_master_nodes, and both nodes start as master-eligible. The requirement is to start both nodes first; the application then writes the details of the first node into unicast_hosts.txt. So, will the second node start successfully with discovery.zen.minimum_master_nodes: 1 and node.master: false?

Ok, I think this is the problem. You need to set discovery.zen.minimum_master_nodes to 2 on both nodes, or else set node.master: false on one of the nodes.

We only read the unicast_hosts.txt file during discovery, i.e. when trying to find a master node. If discovery.zen.minimum_master_nodes is not set then the node elects itself master and never tries to find another master, which means it never reads the unicast_hosts.txt file. When you restart the node it performs discovery again, but this time it reads the unicast_hosts.txt file, finds the other node, and joins it instead of forming an independent cluster.

You can prevent this by setting discovery.zen.minimum_master_nodes: 2 on both nodes (so that each will wait for the other before forming a cluster), or else setting node.master: false on one of the nodes (so that that node will not proceed before finding the other node).
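For a two-node cluster using the file-based hosts provider, the relevant elasticsearch.yml settings on both nodes would look something like this (a sketch for the 6.x zen discovery settings; adjust to your setup):

```yaml
# elasticsearch.yml on both nodes (Elasticsearch 6.x zen discovery)
discovery.zen.hosts_provider: file        # read config/unicast_hosts.txt during discovery
discovery.zen.minimum_master_nodes: 2     # wait for both master-eligible nodes before electing
```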

Thanks for the help. It is working as expected after setting discovery.zen.minimum_master_nodes: 2.
