How to set up an ES 7.1 cluster?

Hello, we just switched to Elasticsearch 7.1.0.
What are the correct settings in the elasticsearch.yml files?
We are seeing the issues shown below:

Log node 2

[2019-10-22T16:57:24,317][INFO ][o.e.p.PluginsService     ] [lumisportal-node-2] loaded module [x-pack-ml]
[2019-10-22T16:57:24,317][INFO ][o.e.p.PluginsService     ] [lumisportal-node-2] loaded module [x-pack-monitoring]
[2019-10-22T16:57:24,317][INFO ][o.e.p.PluginsService     ] [lumisportal-node-2] loaded module [x-pack-rollup]
[2019-10-22T16:57:24,317][INFO ][o.e.p.PluginsService     ] [lumisportal-node-2] loaded module [x-pack-security]
[2019-10-22T16:57:24,317][INFO ][o.e.p.PluginsService     ] [lumisportal-node-2] loaded module [x-pack-sql]
[2019-10-22T16:57:24,317][INFO ][o.e.p.PluginsService     ] [lumisportal-node-2] loaded module [x-pack-watcher]
[2019-10-22T16:57:24,317][INFO ][o.e.p.PluginsService     ] [lumisportal-node-2] no plugins loaded
[2019-10-22T16:57:28,740][INFO ][o.e.x.s.a.s.FileRolesStore] [lumisportal-node-2] parsed [0] roles from file [E:\elasticsearch\config\roles.yml]
[2019-10-22T16:57:29,614][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [lumisportal-node-2] [controller/3020] [Main.cc@109] controller (64 bit): Version 7.1.0 (Build a8ee6de8087169) Copyright (c) 2019 Elasticsearch BV
[2019-10-22T16:57:30,067][DEBUG][o.e.a.ActionModule       ] [lumisportal-node-2] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-10-22T16:57:30,192][INFO ][o.e.d.DiscoveryModule    ] [lumisportal-node-2] using discovery type [zen] and seed hosts providers [settings]
[2019-10-22T16:57:31,223][INFO ][o.e.n.Node               ] [lumisportal-node-2] initialized
[2019-10-22T16:57:31,223][INFO ][o.e.n.Node               ] [lumisportal-node-2] starting ...
[2019-10-22T16:57:31,410][INFO ][o.e.t.TransportService   ] [lumisportal-node-2] publish_address {IP_internal:9300}, bound_addresses {IP_internal:9300}
[2019-10-22T16:57:31,410][INFO ][o.e.b.BootstrapChecks    ] [lumisportal-node-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-10-22T16:57:41,448][WARN ][o.e.c.c.ClusterFormationFailureHelper] [lumisportal-node-2] master not discovered yet: have discovered []; discovery will continue using [IP_internal:9300] from hosts providers and [{lumisportal-node-2}{TH8cWLD5TQa89m_uydyP2g}{_L8xfQ57Q7e0lU_y0ZjjPA}{IP_internal}{IP_internal:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2019-10-22T16:57:51,456][WARN ][o.e.c.c.ClusterFormationFailureHelper] [lumisportal-node-2] master not discovered yet: have discovered []; discovery will continue using [IP_internal:9300] from hosts providers and [{lumisportal-node-2}{TH8cWLD5TQa89m_uydyP2g}{_L8xfQ57Q7e0lU_y0ZjjPA}{IP_internal}{IP_internal:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2019-10-22T16:58:01,467][WARN ][o.e.n.Node               ] [lumisportal-node-2] timed out while waiting for initial discovery state - timeout: 30s
[2019-10-22T16:58:01,467][WARN ][o.e.c.c.ClusterFormationFailureHelper] [lumisportal-node-2] master not discovered yet: have discovered []; discovery will continue using [IP_internal:9300] from hosts providers and [{lumisportal-node-2}{TH8cWLD5TQa89m_uydyP2g}{_L8xfQ57Q7e0lU_y0ZjjPA}{IP_internal}{IP_internal:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2019-10-22T16:58:01,482][INFO ][o.e.h.AbstractHttpServerTransport] [lumisportal-node-2] publish_address {IP_internal:9200}, bound_addresses {IP_internal:9200}
[2019-10-22T16:58:01,482][INFO ][o.e.n.Node               ] [lumisportal-node-2] started
[2019-10-22T16:58:11,483][WARN ][o.e.c.c.ClusterFormationFailureHelper] [lumisportal-node-2] master not discovered yet: have discovered []; discovery will continue using [IP_internal:9300] from hosts providers and [{lumisportal-node-2}{TH8cWLD5TQa89m_uydyP2g}{_L8xfQ57Q7e0lU_y0ZjjPA}{IP_internal}{IP_internal:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0

elasticsearch.yml node-1

cluster.name: lumisportal-cluster-1
node.name: lumisportal-node-1
node.master: true
node.data: true
bootstrap.memory_lock: true
network.host: 10.153.35.133
http.port: 9200
discovery.seed_hosts: ["ip-node-1", "ip-node-2"]
cluster.initial_master_nodes: ["lumisportal-node-1"]

elasticsearch.yml node-2

cluster.name: lumisportal-cluster-2
node.name: lumisportal-node-2
node.master: false
node.data: false
bootstrap.memory_lock: true
network.host: ip-node-2
http.port: 9200
discovery.seed_hosts: ["ip-note-2", "ip-node-1"]
cluster.initial_master_nodes: ["lumisportal-node-1"]

Rogerio,

You've configured your nodes with different cluster names (lumisportal-cluster-1 vs. lumisportal-cluster-2), so they will never join the same cluster. Also, on node-2, you have a typo in discovery.seed_hosts: "ip-note-2" should be "ip-node-2". :grinning:
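For reference, here is a minimal sketch of what the node-2 settings could look like with both issues fixed, assuming the cluster should keep node-1's name (lumisportal-cluster-1) and that "ip-node-1"/"ip-node-2" are placeholders for your real addresses:

```yaml
# node-2: cluster.name must match node-1's exactly
cluster.name: lumisportal-cluster-1
node.name: lumisportal-node-2
# typo fixed: "ip-note-2" -> "ip-node-2"
discovery.seed_hosts: ["ip-node-1", "ip-node-2"]
cluster.initial_master_nodes: ["lumisportal-node-1"]
```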

Hi @Glen_Smith,

Thanks for answering my question.

To make my problem clearer, here are my full elasticsearch.yml files.

I have two servers that I need to configure as a cluster, but I don't know which settings are wrong.

Server 1 - elasticsearch.yml

# ======================== Elasticsearch Configuration =========================   
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: lumisportal-cluster-1
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: lumisportal-node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
node.master: true
node.data: true
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.153.35.133
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["10.153.35.133", "10.153.35.100"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["10.153.35.133"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#transport.host: 10.153.35.133
#transport.tcp.port: 9300

Server 2 - elasticsearch.yml

# ======================== Elasticsearch Configuration =========================   
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: lumisportal-cluster-2
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: lumisportal-node-2
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
node.master: false
node.data: true
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.153.35.100
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["10.153.35.100", "10.153.35.133"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["10.153.35.133", "10.153.35.100"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#transport.host: 10.153.35.100
#transport.tcp.port: 9300

What are the correct settings so that I can have a 2-node cluster?

Rogerio,

The two configurations still need to have the same cluster.name. Server 2 is set to lumisportal-cluster-2, so it will refuse to join lumisportal-cluster-1.
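As a sketch, the discovery-related settings for both servers might look like this, assuming you keep the name lumisportal-cluster-1 and the IPs from your posted files (note that cluster.initial_master_nodes here references the master-eligible node by its node.name, which is what the 7.x docs recommend, rather than by IP):

```yaml
# Server 1 (10.153.35.133) -- master-eligible data node
cluster.name: lumisportal-cluster-1
node.name: lumisportal-node-1
node.master: true
node.data: true
network.host: 10.153.35.133
discovery.seed_hosts: ["10.153.35.133", "10.153.35.100"]
cluster.initial_master_nodes: ["lumisportal-node-1"]

# Server 2 (10.153.35.100) -- same cluster.name as server 1
cluster.name: lumisportal-cluster-1
node.name: lumisportal-node-2
node.master: false
node.data: true
network.host: 10.153.35.100
discovery.seed_hosts: ["10.153.35.133", "10.153.35.100"]
cluster.initial_master_nodes: ["lumisportal-node-1"]
```

After restarting both nodes, checking the cluster health endpoint (e.g. `curl http://10.153.35.133:9200/_cluster/health?pretty`) should report "number_of_nodes": 2. Also be aware that with node.master: false on node-2 you have a single master-eligible node, so the cluster cannot survive the loss of server 1.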
