Can't create a cluster with 2 nodes on Elasticsearch version 2.2.2

Hello there! :slight_smile:

We have a single node of Elasticsearch version 2.2.2 and we decided to extend this cluster to two nodes, with replicas and so on.
But the problem is that these two nodes don't see each other. =(

Here's the elasticsearch.yml of the second node:


cluster.name: awesomecluster
cluster.routing.allocation.disk.watermark.high: 97%
cluster.routing.allocation.disk.watermark.low: 96%
cluster.routing.allocation.enable: all
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts:
- 192.168.1.125
- 192.168.1.24
index.number_of_replicas: 1
network.host: 192.168.1.24
node.name: es2
script.engine.groovy.inline.aggs: true
script.engine.groovy.inline.search: true

First node (with data):

cluster.name: awesomecluster
cluster.routing.allocation.disk.watermark.high: 97%
cluster.routing.allocation.disk.watermark.low: 96%
cluster.routing.allocation.enable: all
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts:
- 192.168.1.125
- 192.168.1.24
index.number_of_replicas: 1
network.host: 192.168.1.125
node.name: es1
script.engine.groovy.inline.aggs: true
script.engine.groovy.inline.search: true

Here are the logs from the es1 and es2 nodes:

es2

[2023-01-17 18:43:24,979][INFO ][node                     ] [es2] version[2.2.2], pid[96220], build[fcc01dd/2016-03-29T08:49:35Z]
[2023-01-17 18:43:24,979][INFO ][node                     ] [es2] initializing ...
[2023-01-17 18:43:25,290][INFO ][plugins                  ] [es2] modules [lang-expression, lang-groovy], plugins [], sites []
[2023-01-17 18:43:25,305][INFO ][env                      ] [es2] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/vdb1)]], net usable_space [5tb], net total_space [7.7tb], spins? [possibly], types [ext4]
[2023-01-17 18:43:25,305][INFO ][env                      ] [es2] heap size [7.9gb], compressed ordinary object pointers [true]
[2023-01-17 18:43:26,318][INFO ][node                     ] [es2] initialized
[2023-01-17 18:43:26,318][INFO ][node                     ] [es2] starting ...
[2023-01-17 18:43:26,421][INFO ][transport                ] [es2] publish_address {es2/192.168.1.24:9300}, bound_addresses {0.0.0.0:9300}
[2023-01-17 18:43:26,427][INFO ][discovery                ] [es2] awesomecluster/7uh156jiQ3-xx9uCX7_2cQ
[2023-01-17 18:43:29,443][INFO ][cluster.service          ] [es2] new_master {es2}{7uh156jiQ3-xx9uCX7_2cQ}{192.168.1.24}{es2/192.168.1.24:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2023-01-17 18:43:29,468][INFO ][http                     ] [es2] publish_address {es2/192.168.1.24:9200}, bound_addresses {0.0.0.0:9200}
[2023-01-17 18:43:29,468][INFO ][node                     ] [es2] started
[2023-01-17 18:43:29,470][INFO ][gateway                  ] [es2] recovered [0] indices into cluster_state

es1


[2023-01-17 17:25:47,729][INFO ][node                     ] [es1] version[2.2.2], pid[92695], build[fcc01dd/2016-03-29T08:49:35Z]
[2023-01-17 17:25:47,730][INFO ][node                     ] [es1] initializing ...
[2023-01-17 17:25:48,043][INFO ][plugins                  ] [es1] modules [lang-expression, lang-groovy], plugins [], sites []
[2023-01-17 17:25:48,058][INFO ][env                      ] [es1] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/vdb1)]], net usable_space [4.5tb], net total_space [7.6tb], spins? [possibly], types [ext4]
[2023-01-17 17:25:48,058][INFO ][env                      ] [es1] heap size [7.9gb], compressed ordinary object pointers [true]
[2023-01-17 17:25:49,247][INFO ][node                     ] [es1] initialized
[2023-01-17 17:25:49,247][INFO ][node                     ] [es1] starting ...
[2023-01-17 17:25:49,347][INFO ][transport                ] [es1] publish_address {es1/192.168.1.125:9300}, bound_addresses {0.0.0.0:9300}
[2023-01-17 17:25:49,353][INFO ][discovery                ] [es1] awesomecluster/vF8iXeYpQmCB0yozjRL7fA
[2023-01-17 17:25:52,370][INFO ][cluster.service          ] [es1] new_master {es1}{vF8iXeYpQmCB0yozjRL7fA}{192.168.1.125}{es1/192.168.1.125:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2023-01-17 17:25:52,395][INFO ][http                     ] [es1] publish_address {es1/192.168.1.125:9200}, bound_addresses {0.0.0.0:9200}
[2023-01-17 17:25:52,395][INFO ][node                     ] [es1] started
[2023-01-17 17:25:53,016][INFO ][gateway                  ] [es1] recovered [124] indices into cluster_state
[2023-01-17 17:25:57,935][INFO ][cluster.routing.allocation] [es1] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[wsman][4], [wsman][4]] ...]).

Can you help me with this? :)

P.S.
Telnet between the nodes on that IP and port works fine.

Your nodes won't talk to each other this way. You now have two clusters with the same name, because both nodes are master-eligible and you set minimum_master_nodes to 1; this is known as split brain.
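You can confirm this by querying each node over HTTP (ports taken from your logs) with something like the commands below; if each node lists only itself in _cat/nodes and reports a different master, they have formed two independent clusters:

curl 192.168.1.125:9200/_cat/nodes?v
curl 192.168.1.24:9200/_cat/nodes?v
curl 192.168.1.125:9200/_cat/master
curl 192.168.1.24:9200/_cat/master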

If you want to add another node to your current cluster, you need to set the node roles of your second node so that it is not master-eligible.

Basically you will need this in your elasticsearch.yml, as explained in the documentation:

node.master: false
node.data: true
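Putting it together with your existing discovery settings, the second node's elasticsearch.yml would look roughly like this (the two role lines are the only additions; keep your other settings as they are):

# es2: holds data, but is never master-eligible
cluster.name: awesomecluster
node.name: es2
node.master: false
node.data: true
network.host: 192.168.1.24
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts:
- 192.168.1.125
- 192.168.1.24
discovery.zen.minimum_master_nodes: 1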

I'm not sure, but I think you would also need to delete the data on the new node for it to be able to join the cluster. Since it has already formed a cluster of its own, trying to add it now may have undesired consequences.
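If you go that route, something like this on the new node only (the data path is taken from your log output; that the service is managed by systemd is an assumption about your setup, so adjust accordingly):

# run this ONLY on the new, empty node (es2) -- never on es1, which holds your 124 indices
sudo systemctl stop elasticsearch              # assumption: systemd-managed service
sudo rm -rf /usr/share/elasticsearch/data/*    # data path shown in your logs
sudo systemctl start elasticsearch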

After this you will be able to form a cluster with two nodes, but if your master node goes down, your entire cluster will be unavailable.

If you want some kind of resilience, you would need three nodes and to change minimum_master_nodes to 2.
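The rule of thumb is minimum_master_nodes = (master-eligible nodes / 2) + 1, so with three master-eligible nodes that gives 2. A sketch of the relevant lines on each of the three nodes (the third IP is just a placeholder for whatever address you use):

# quorum for 3 master-eligible nodes: (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts:
- 192.168.1.125
- 192.168.1.24
- 192.168.1.x        # placeholder: your third node's address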

One important thing: version 2.x is way past its EOL. I'm not sure if those changes will work, as I haven't worked with version 2 for many years, so do it at your own risk.

Yeah, it's very, very, very old; please see the EOL page.

Ok, got it!
Thank you very much!
