I have a cluster in which I configured master node IP addresses in "cluster.initial_master_nodes" (e.g. cluster.initial_master_nodes: ["172.12.12.2:9300","172.12.12.3:9300","172.12.12.4:9300"]), and it worked perfectly. Now I am creating a new cluster with domain names configured in "cluster.initial_master_nodes" (e.g. cluster.initial_master_nodes: ["es-m1:9300","es-m2:9300","es-m3:9300"]). The domain names resolve to the corresponding node IPs. The cluster was bootstrapped, but the master nodes were not discovered.
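For reference, here is a minimal sketch of the new cluster's elasticsearch.yml as I understand it; the discovery.seed_hosts line is an assumption on my part about how the nodes are pointed at each other and is not part of the configuration shown above:

```yaml
# elasticsearch.yml on the new cluster (the old, working cluster used IP:port values
# such as "172.12.12.2:9300" here instead of hostnames)
cluster.initial_master_nodes: ["es-m1:9300", "es-m2:9300", "es-m3:9300"]

# Assumption: the nodes are pointed at each other with the same addresses;
# this setting is not shown in the original question.
discovery.seed_hosts: ["es-m1:9300", "es-m2:9300", "es-m3:9300"]
```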
I went through the documentation and found that we must give the node name (which defaults to the hostname) for the "cluster.initial_master_nodes" property. The docs say:
"Identify the initial master nodes by their node.name, which defaults to their hostname. Ensure that the value in cluster.initial_master_nodes matches the node.name exactly."
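If I read that correctly, the entries are matched against node.name rather than against a resolvable address, so something like the following sketch is what the docs seem to expect (the node.name values and the omission of ports are my assumptions based on that quote):

```yaml
# On the node named es-m1 (es-m2 and es-m3 would be analogous)
node.name: es-m1    # defaults to the hostname if not set

# Assumption: node names without ports, so they match node.name exactly
cluster.initial_master_nodes: ["es-m1", "es-m2", "es-m3"]
```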
I would like to know why the IP addresses worked in the first cluster and why the domain names did not work in the new one. I could not find any DNS-resolution-related exceptions in the Elasticsearch logs, but I did find ConnectTransportException(address [127.0.0.1:9300], node [null], requesting [false] connection failed). I don't know what went wrong here. Does the node name (a.k.a. hostname) have to resolve to the respective master node's IP?
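In case it is relevant, this is the kind of setting I suspect might be behind the 127.0.0.1 connection error; it is purely a guess on my part and not taken from the failing configuration:

```yaml
# Guess: if network.host is not set, Elasticsearch binds and publishes on loopback
# (127.0.0.1), which would match the address in the ConnectTransportException above.
network.host: 172.12.12.2    # the node's own IP (example value from the question)
node.name: es-m1             # defaults to the hostname when unset
```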