Elasticsearch doesn't find the cluster

Hello,

I'm trying to connect an Elasticsearch node (node2) on one server to an already existing Elasticsearch node (node1) on another server, with the goal of replicating data from node1 to node2. The problem appears after the configuration in elasticsearch.yml. I think I did everything according to what I need, but when I check the info of each Elasticsearch node I get the following results:
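(These are the responses of each node's root endpoint; for example, something like

curl http://192.168.102.5:9200/
curl http://192.168.102.60:9200/

where 192.168.102.5 is node1's address from its config below, and 192.168.102.60 is assumed to be node2's address, taken from node1's unicast host list.)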

Elasticsearch node1:

{
  "name" : "AvantNodeold_1",
  "cluster_name" : "AvantData",
  "cluster_uuid" : "83EJmDNrRVirBWcZDgs9ew",
  "version" : {
    "number" : "5.6.9",
    "build_hash" : "877a590",
    "build_date" : "2018-04-12T16:25:14.838Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

Elasticsearch node2:

{
  "name" : "Avantnodenew_1",
  "cluster_name" : "AvantData",
  "cluster_uuid" : "na",
  "version" : {
    "number" : "5.6.9",
    "build_hash" : "877a590",
    "build_date" : "2018-04-12T16:25:14.838Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

In these results you can see that the cluster_uuid of node2 ("na") doesn't match the one of node1 (the pre-existing node), so it looks like node2 never joined the existing cluster. Did I configure something wrong? What could be the problem?
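For reference, a quick way to check which nodes have actually joined node1's cluster is to query node1 with the standard cat/health APIs (using node1's address from the config below):

curl 'http://192.168.102.5:9200/_cat/nodes?v'
curl 'http://192.168.102.5:9200/_cluster/health?pretty'

If node2 had joined, it would show up in that node list next to AvantNodeold_1; a cluster_uuid of "na" suggests node2 has not managed to join (or form) a cluster at all.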

ELASTICSEARCH NODE1 (elasticsearch.yml):

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: AvantData
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: AvantNodeold_1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.attr.tipono: hot
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /home/AvantData/dados/
#
# Path to log files:
#
path.logs: /home/AvantData/logs/
#
# Path to backup files (snapshots):
#
#path.repo: /mnt/backup/AvantData
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.102.5
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["192.168.102.5", "192.168.102.60"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
node.master: true
node.data: true
node.ingest: false
indices.query.bool.max_clause_count: 10000

ELASTICSEARCH NODE2 (elasticsearch.yml):

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: AvantData
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: Avantnodenew_1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.attr.tipono: hot
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
path.data: /avantdata/dados
#
# Path to log files:
#
#path.logs: /path/to/logs
path.logs: /avantdata/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["192.168.102.5"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3

I have the same issue! I'm trying to resolve it, but no success so far.

Hi @DanV

In "discovery.zen.ping.unicast.hosts" in both files, you need to add the IP of all the nodes in the cluster, if we have dedicated nodes, it is just the IP of the master.

In "discovery.zen.minimum_master_nodes:" in both files, you need have a same number, in a cluster with 2 nodes, will be impossible avoid a split brain, but in your case a correct number is 2. (total number of master-eligible nodes / 2 + 1)
