We have an Elasticsearch cluster with three nodes. All of them are connected to each other, and each has its own public IP. Everything works fine via curl commands. Our program, which runs on a different network, needs to connect to the cluster. It works when we connect to any one of the nodes; the node we connect to appears to be the master.
Now suppose that master node fails because of a hardware failure or a similar issue. How can my program still reach the cluster, given that the master's IP is the one configured on the program side?
All of the language clients support providing a list of nodes to connect to and will automatically fail over when a failure is detected. Another option is to put a load balancer in front of the cluster and connect through that.
Hey Christian, thanks for the reply.
Our program is written in Java.
We are using a load balancer, but we are facing an issue with its IP address switching. Even though the program uses the domain name that points to the load balancer, requests go out through a random IP address.
So we sometimes get errors like "nodes are not available". It works fine again once we restart the program.
So I just want to know: what is the best approach when working with a cluster?
Both of our Java clients (REST-based and transport-protocol-based) support specifying multiple nodes and handle failover when required, so you should be able to connect without a load balancer.
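For example, with the REST client you can pass several hosts to RestClient.builder, and it will rotate requests across them and route around nodes that stop responding. A minimal sketch, assuming HTTP on port 9200 and placeholder hostnames:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class EsClientFactory {

    // Builds a client that knows about all three nodes, so the loss of a
    // single node does not take the application down with it.
    public static RestHighLevelClient create() {
        return new RestHighLevelClient(
                RestClient.builder(
                        // Placeholder hostnames -- replace with your nodes' addresses.
                        new HttpHost("es-node-1.example.com", 9200, "http"),
                        new HttpHost("es-node-2.example.com", 9200, "http"),
                        new HttpHost("es-node-3.example.com", 9200, "http")));
    }
}
```

The transport client follows the same idea: you add one transport address per node after building the client, and it fails over between them.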