Two-node cluster in 2.1.1 fails for high availability

Hi,

Below are my Elasticsearch configuration settings for each node in my cluster.

cluster.name: hits
node.name: "node1"
node.master: true
node.data: true
network.host: 127.11.11.1
http.port : 9210
transport.tcp.port : 9300-9400
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.11.11.1"]

cluster.name: hits
node.name: "node2"
node.master: false
node.data: true
network.host: 127.11.11.1
http.port : 9211
transport.tcp.port : 9300-9400
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.11.11.1"]

However, I am getting warnings when I index data. On the master node I get "received shard failed for [hits][0]", and on the second node "marking and sending shard failed due to [failed to create index]".

Another issue: when both nodes are up and running I am able to view data using http://127.11.11.1:9210 and http://127.11.11.1:9211, but when the master node is down the second node fails to display data with the reason "all shards failed".

How do I create replicas of the master node's shards on the second node?

Finally, why doesn't the second node become master on its own when the master node is down? Does 2.1.1 not support this, or do I need to make both of them master-eligible?

Please help me.

Thanks,
Chala.

Simply put, when ES nodes are located on different machines, each must bind to a valid non-loopback IP address, e.g.:

network.host: 192.168.1.200

where 192.168.1.200 is that machine's own IP address (each node binds to its own address).
Once that's done, both nodes can communicate with each other, and if the master goes down the other one can take over, provided it is master-eligible (node.master: true).
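A minimal sketch of the two config files, assuming node1 is at 192.168.1.200 and node2 at 192.168.1.201 (substitute your real addresses):

# node1 elasticsearch.yml
cluster.name: hits
node.name: "node1"
node.master: true
node.data: true
network.host: 192.168.1.200
discovery.zen.ping.unicast.hosts: ["192.168.1.200", "192.168.1.201"]

# node2 elasticsearch.yml
cluster.name: hits
node.name: "node2"
node.master: true
node.data: true
network.host: 192.168.1.201
discovery.zen.ping.unicast.hosts: ["192.168.1.200", "192.168.1.201"]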
If you want the second node to hold replica copies of the primary shards, use one of the following methods (with two data nodes, one replica per primary is enough; a higher number would just leave the extra replicas unassigned):

Method 1 - via the REST API:

PUT /my_index/_settings
{
  "number_of_replicas": 1
}
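The same request with curl, assuming node1's HTTP port 9210 from your config (replace <node1-ip> with the real address):

curl -XPUT 'http://<node1-ip>:9210/my_index/_settings' -d '{"number_of_replicas": 1}'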

Method 2 - in the elasticsearch.yml config file (note this default only applies to indices created after the change):

index.number_of_replicas: 1
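Either way, you can verify that the replica shards actually get assigned to the second node with the cat shards API, e.g. (placeholder address again):

curl -XGET 'http://<node1-ip>:9210/_cat/shards?v'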

Thanks Python_coder,

My master node is on IP 192.168.4.174 and my second node is on IP 192.168.2.96.

I have updated the values below on the master node:

network.host: 192.168.4.174
http.port : 9210
discovery.zen.ping.unicast.hosts: ["192.168.2.96","192.168.4.174"]

and on the second node:

network.host: 192.168.4.174
http.port: 9212
discovery.zen.ping.unicast.hosts: ["192.168.2.96","192.168.4.174"]

After this change the master comes up as usual, but the second node throws this error:

Exception in thread "main" BindTransportException[Failed to bind to [9300-9400]]; nested: ChannelException[Failed to bind to: /192.168.4.174:9400]; nested: BindException[Cannot assign requested address: bind];

I'm not sure why it's not able to bind. How can I specify a non-loopback IP, and how do I check whether an IP is non-loopback?

Thanks for your help, it's much appreciated.

Thanks,
Chala

The IP address you are trying to bind to on node 2 is the same as node 1's (192.168.4.174). You probably copied the file from node1 to node2 using scp and forgot to change the binding IP address.

On node2:

network.host: 192.168.2.96
instead of:
network.host: 192.168.4.174
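As for checking whether an address is non-loopback: anything in 127.0.0.0/8 (such as your original 127.11.11.1) is loopback. On a Linux host you can list the machine's real addresses with, for example:

ip -4 addr show    # use an "inet" address that is not 127.x.x.x
hostname -I        # prints the host's non-loopback addresses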

Hi Python_coder,

Thanks for your response, it's working fine now after updating network.host to 192.168.2.96.

So now I can access ES at http://192.168.4.174:9200 (node 1) and http://192.168.2.96:9200 (node 2); I have updated http.port to 9200 in both places.

When node 1 is down, node 2 promotes itself to master and works fine. But how can I specify two URLs for ES in my application to maintain high availability? Should I create a client node that takes care of sending requests to the other nodes?

Thanks,
Chala.

To be honest I didn't quite get it, could you rephrase the question please :slightly_smiling:? I believe you want to query both ES nodes at the same time, is that it? If so:
ES is distributed by nature; it is built to be always available, and the official documentation says:

Scale can come from buying bigger servers (vertical scale, or scaling up) or from buying more servers (horizontal scale, or scaling out).

However, since you have multiple nodes in your setup, which counts as horizontal scaling (separate servers), they are already working together to share their data and workload.

My ES setup is on two different machines, one node per machine.

I want to have a single point of contact for ES that internally sends requests to whichever node is available. To achieve this, should I use a client node, a proxy, or DNS host settings?

Thanks,
Chala.

A client node would do the bidding for you, but it sounds like you mean how to route queries to the physically available server. It could happen that the server (IP) is reachable while Elasticsearch itself is down, in which case queries would still be sent to the unavailable ES instance (I'm not sure).
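For reference, a client (coordinating-only) node in 2.x is simply a node with both roles disabled; a sketch of its elasticsearch.yml (the client machine's address is a placeholder):

cluster.name: hits
node.name: "client1"
node.master: false
node.data: false
network.host: <client-machine-ip>
discovery.zen.ping.unicast.hosts: ["192.168.4.174", "192.168.2.96"]

Your application would then talk only to this node's HTTP port, and it forwards each request to whichever data node holds the relevant shards.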

In order to monitor which ES node is available you can use software like HAProxy or Keepalived.
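For example, a rough HAProxy sketch (untested; ports, names and the missing global/defaults sections are assumptions) that round-robins across both nodes and health-checks them via the cluster health endpoint:

frontend es_http
    bind *:9200
    mode http
    default_backend es_nodes

backend es_nodes
    mode http
    balance roundrobin
    option httpchk GET /_cluster/health
    server node1 192.168.4.174:9200 check
    server node2 192.168.2.96:9200 check

The application then points only at the HAProxy address, and requests go to whichever node passes the health check.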

Please keep in mind that I'm not sure of my answer; feel free to re-ask the question elsewhere and let me know what you find out.
Regards :slightly_smiling: