Two nodes in the same cluster, configured on different machines

I'm setting up two nodes in the same cluster, with each node configured in its own elasticsearch.yml on a separate machine.

The first node is on machine 1:
cluster.name: sql
node.name: engineering
node.master: true
node.data: true
network.host: 127.0.0.3
transport.tcp.port: 9300-9400
http.port: 9200
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.3", "127.0.0.2:9200"]
index.number_of_shards: 2
index.number_of_replicas: 1
network.bind_host: 127.0.0.3
network.publish_host: 127.0.0.3

The second node is on machine 2:
cluster.name: sql
node.name: team
node.master: false
node.data: true
network.host: 127.0.0.2
http.port: 9200
transport.tcp.port: 9300-9400
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.3", "127.0.0.2:9200"]
index.number_of_shards: 2
index.number_of_replicas: 1
network.bind_host: 127.0.0.2
network.publish_host: 127.0.0.2

I cannot seem to connect the 2 nodes.

Change

 discovery.zen.ping.unicast.hosts: ["127.0.0.3", "127.0.0.2:9200"]

To:

 discovery.zen.ping.unicast.hosts: ["127.0.0.3", "127.0.0.2:9300"]

Or

  discovery.zen.ping.unicast.hosts: ["127.0.0.3", "127.0.0.2"]
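One thing to watch: every entry in the unicast.hosts list needs to be a properly quoted string, or Elasticsearch can't parse the setting at startup. A quick sketch (using Python's `ast` module here purely to illustrate the list quoting, not Elasticsearch itself) shows how a missing opening quote breaks the list:

```python
import ast

# Properly quoted list of hosts, as expected in elasticsearch.yml
good = '["127.0.0.3", "127.0.0.2:9300"]'
# Missing the opening quote before 127.0.0.3 -- an easy copy/paste slip
bad = '[127.0.0.3","127.0.0.2:9300"]'

print(ast.literal_eval(good))  # parses to a clean two-element list

try:
    ast.literal_eval(bad)
except (SyntaxError, ValueError) as exc:
    print("broken quoting:", type(exc).__name__)
```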

I tried running the nodes as a Windows service, but they still can't see each other: the first node runs fine on its machine, yet node2 never shows up in my elasticsearch-head.

You need to set it to a non-loopback address; the entire 127.0.0.0/8 range is loopback.

See Connectivity issues with a new/upgraded 2.X cluster? Read here first :)

Hi Warkolm,

Do I need to set my master node to 127.0.0.0/8, while the other nodes can be set to other IPs, as long as my master node holds the non-loopback address?

No, they all need to be non-loopback IPs.


Hi Warkolm,

The first node is on machine 1:
cluster.name: sql
node.name: engineering
node.master: true
node.data: true
network.host: 127.0.0.0
transport.tcp.port: 9300-9400
http.port: 9200
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: [?]
index.number_of_shards: 2
index.number_of_replicas: 1
network.bind_host: 127.0.0.0
network.publish_host: 127.0.0.0

The second node is on machine 2:
cluster.name: sql
node.name: team
node.master: false
node.data: true
network.host: 127.0.0.0
http.port: 9200
transport.tcp.port: 9300-9400
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: [?]
index.number_of_shards: 2
index.number_of_replicas: 1
network.bind_host: 127.0.0.0
network.publish_host: 127.0.0.0

Are these changes correct for my nodes? What should I put in discovery.zen.ping.unicast.hosts?

You cannot use 127.0.0.*, at all.
Take a read of https://en.wikipedia.org/wiki/Localhost
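As a quick sanity check, Python's standard-library ipaddress module (used here just for illustration) confirms that everything in 127.0.0.0/8 is loopback, so traffic to those addresses never leaves the machine and a remote node can never reach them:

```python
import ipaddress

# Every address in 127.0.0.0/8 is loopback -- unreachable from other hosts.
for addr in ["127.0.0.1", "127.0.0.3", "127.11.11.1"]:
    print(addr, ipaddress.ip_address(addr).is_loopback)  # True for all

# A private LAN address, by contrast, is routable between machines.
print(ipaddress.ip_address("192.168.1.11").is_loopback)  # False
```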

Please don't use 127.x.x.x as your node IP. Change both server IPs to something like 192.168.1.11 and 192.168.1.12.
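As a sketch, assuming 192.168.1.11 and 192.168.1.12 are the machines' actual LAN addresses (substitute your own), the two elasticsearch.yml files could look like:

```yaml
# machine 1 (192.168.1.11) -- elasticsearch.yml
cluster.name: sql
node.name: engineering
node.master: true
node.data: true
network.host: 192.168.1.11
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
# list the transport (9300) addresses of both machines, not the http port
discovery.zen.ping.unicast.hosts: ["192.168.1.11:9300", "192.168.1.12:9300"]

# machine 2 (192.168.1.12) -- elasticsearch.yml
cluster.name: sql
node.name: team
node.master: false
node.data: true
network.host: 192.168.1.12
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.11:9300", "192.168.1.12:9300"]
```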

Arianayay,

Any luck on this? Did you solve the issue? I'm in the same boat and need some help.

Thanks,
Chala.

Arianayay,

I was able to fix the "no known master" issue; here are my .yml configuration settings:

cluster.name: hits
node.name: "node1"
node.master: true
node.data: true
network.host: 127.11.11.1
http.port: 9210
transport.tcp.port: 9300-9400
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.11.11.1"]

cluster.name: hits
node.name: "node2"
node.master: false
node.data: true
network.host: 127.11.11.1
http.port: 9211
transport.tcp.port: 9300-9400
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.11.11.1"]

However, I'm getting warnings when I index data: on the master node I get "received shard failed for [hits][0]", and on the second node "marking and sending shard failed due to [failed to create index]".

Another issue: when both are up and running I can view data using http://127.11.11.1:9210 and http://127.11.11.1:9211, but when the master node is down the second node fails to display data with the reason "all shards failed".

How do I create a copy of the data on the second node?
If I create replicas of the shards, how do I map them so that the master node's shard replicas are stored on the second node, letting me read data even when the master fails?

Finally, why is the second node not becoming master on its own when the master node is down? Does 2.1.1 not support this? Please help me.

Thanks,
Chala

@chalapathi it'd be better if you started your own thread please.

Sure, Thanks Warkolm

Definitely true.

Since the two ES nodes are on different machines, both must bind to a valid non-loopback IP address. As for the copy of the data, you can either use curl to change the settings as follows:

PUT /my_index/_settings
{
  "number_of_replicas": 2
}

or set the replica count directly in the elasticsearch.yml config file:

index.number_of_replicas: 2

Hi, I already solved this issue. The replies are correct: use the IP of the server. Don't make up an IP; use the one assigned to your own machine/server, and it will work accordingly.

Hi,
We have one code base that connects to Elasticsearch (localhost:9200). We deployed this code on two different boxes behind a load-balancing server. In this case, how do we configure ES on the two different machines so that both connect to ES and the index is reflected on both sides?

Please start your own thread, this one is very old.