Two-node cluster in 2.1.1 fails for high availability


Below are my Elasticsearch configuration settings for each node in my cluster.

node1:

node.master: true
node.data: true
http.port: 9210
transport.tcp.port: 9300-9400
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: [""]

node2:

node.master: false
node.data: true
http.port: 9211
transport.tcp.port: 9300-9400
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: [""]

However, I am getting warnings when I index data. On the master node I get "received shard failed for [hits][0]", and on the second node "marking and sending shard failed due to [failed to create index]".

Another issue: when both nodes are up and running I am able to view the data, but when the master node is down the second node fails to display data with the reason "all shards failed".

How can I create replicas of the master node's shards on the second node?

Finally, why doesn't the second node become master on its own when the master node is down? Does 2.1.1 not support this, or do I need to configure both of them as master-eligible nodes?

Please help me.


Simply put, when ES nodes are located on different machines, each must bind to a valid non-loopback IP address.

Set network.host in elasticsearch.yml on each node:

network.host: <non-loopback-ip>

where <non-loopback-ip> is the local ES IP address.
Once that's done, both nodes can communicate, and if the master goes down the other takes over.
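For a two-node setup, the relevant elasticsearch.yml lines could look roughly like this (a sketch; the 192.168.1.x addresses are placeholders, not your actual machines):

```yaml
# node1's elasticsearch.yml (placeholder IPs - use your machines' real addresses)
network.host: 192.168.1.10
discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.20"]

# node2's elasticsearch.yml
network.host: 192.168.1.20
discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.20"]
```

Listing both nodes in discovery.zen.ping.unicast.hosts on each machine lets either node find the other at startup.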
If you want to create replicas of the master node's shards, use one of the following methods:

Method 1 - via the update index settings API (e.g. with curl):

PUT /my_index/_settings
{
  "number_of_replicas": 2
}

Method 2 - in the elasticsearch.yml config file (this sets the default for newly created indices):

index.number_of_replicas: 2

Thanks Python_coder,

My master node runs on IP: and my second node runs on IP:

I have updated the below values on the master node:

http.port: 9210
discovery.zen.ping.unicast.hosts: ["",""]

and on the second node:

http.port: 9212
discovery.zen.ping.unicast.hosts: ["",""]

After this change the master comes up as usual, but the second node throws an error:

Exception in thread "main" BindTransportException[Failed to bind to [9300-9400]]; nested: ChannelException[Failed to bind to: /]; nested: BindException[Cannot assign requested address: bind];

I'm not sure why it's not able to bind. How can I specify a non-loopback IP, and how do I check whether an IP is non-loopback?

Thanks for your help, it's much appreciated.


The IP address you are trying to bind to on node2 is the same as node1's. You probably copied the file from node1 to node2 using scp and forgot to change the binding IP address.

On node2, set the binding address to node2's own IP instead of node1's.
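As for checking whether an IP is loopback: loopback is simply the reserved 127.0.0.0/8 range (plus ::1 for IPv6), so anything outside it is non-loopback. A quick shell sketch (the is_loopback helper is mine, not an ES tool):

```shell
#!/bin/sh
# Loopback means the reserved 127.0.0.0/8 IPv4 range (plus ::1 for IPv6);
# any other address assigned to the machine is non-loopback.
is_loopback() {
  case "$1" in
    127.*|::1) echo "loopback" ;;
    *)         echo "non-loopback" ;;
  esac
}

is_loopback 127.0.0.1     # prints "loopback"
is_loopback 192.168.1.20  # prints "non-loopback"
```

You can list the addresses actually assigned to a machine with `ip -4 addr show` (or `ifconfig` on older systems); any inet entry outside 127.0.0.0/8 is a candidate for network.host.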

Hi Python_coder,

Thanks for your response, it's working fine now after updating to

So now I can access my ES on port 9200 from (node1) and (node2); I have updated http.port: 9200 in both places.

When node1 is down, node2 promotes itself to master and works fine. But how can I specify two URLs for ES in my application to maintain high availability? Should I create a client node which will take care of sending requests to the other nodes?


To be honest I didn't get it, could you rephrase the question please :slightly_smiling: ? I believe you want to query both ES nodes at the same time, is that it? If so:
ES is distributed by nature, it is built to be always available, and the official documentation says:
ES is distributed by nature, it is built to be always available and it is said on the official documentation:

Scale can come from buying bigger servers (vertical scale, or scaling up) or from buying more servers (horizontal scale, or scaling out).

However, since your deployment has multiple nodes, which counts as horizontal scaling (separate servers), they work together to share their data and workload.

My ES setup is on two different machines, with one node on each machine.

I want a single point of contact for my ES that internally sends requests to whichever node is available. To achieve this, should I use a client node, a proxy, or DNS host settings?


A client node would do the bidding for you, but it sounds like you mean routing queries to the physically available server. It can happen that the server (IP) is reachable but Elasticsearch itself is down; in that case queries would still be sent to the unavailable service (ES instance), if you will (I'm not sure).
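For reference, in ES 2.x a client node is just a node that is neither master-eligible nor a data holder; it joins the cluster and routes requests. A minimal elasticsearch.yml sketch (placeholder IPs, not your actual addresses):

```yaml
# client node: joins the cluster and routes requests, holds no data
node.master: false
node.data: false
network.host: 192.168.1.30   # placeholder address for the client-node machine
discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.20"]
```

Your application then talks only to this node, and it forwards operations to whichever data node holds the relevant shards.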

In order to monitor which ES node is available you can use software like HAProxy or keepalived.
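For example, a minimal HAProxy sketch (placeholder IPs; the httpchk health check polls the ES HTTP root, which returns 200 when the node is up, so traffic only goes to live nodes):

```
frontend es_front
    mode http
    bind *:9200
    default_backend es_back

backend es_back
    mode http
    option httpchk GET /
    server node1 192.168.1.10:9200 check
    server node2 192.168.1.20:9200 check
```

The application then points at the HAProxy address only, and failover between the two ES nodes happens behind it.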

Please keep in mind that I'm not sure of my answer; ask the question elsewhere too and let me know what you find.
Regards :slightly_smiling: