[SOLVED] Unable to create ES 7.1 cluster

Hi,

I'm trying to set up a 3-node cluster.

My configuration is:

cluster.name: dxproelsme
node.name: dxproelsme01
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: _site_
discovery.seed_hosts:
  - 172.22.2.81
  - 172.22.2.82
  - 172.22.2.83
cluster.initial_master_nodes:
  - 172.22.2.81
  - 172.22.2.82
  - 172.22.2.83
node.master: true
node.data: true
xpack.security.enabled: false

No errors in the logs.
Only one node in the cluster on each node.

curl http://172.22.2.81:9200/_cluster/health?pretty

{
  "cluster_name" : "dxproelsme",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  ...
}

Thanks for your help.

network.host: "site" might not bind to the correct network if you have multiple network interface. You should be the specific network interface or IP address

Eg. If all your network interface, you want to bind "eth0"
or simply just hardcode the IP address "172.22.2.81"
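
To double-check which address Elasticsearch actually bound to, something like this should show the listening sockets (assuming ss is available; netstat works too):

# show the listening sockets for the HTTP (9200) and transport (9300) ports
ss -tlnp | grep -E '9200|9300'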

I have only one IP address.

I also tried with the IP address, but I get the same behaviour.

I see the correct IP address in the logs.

Try this? network.host will be different on each host.
Also check /var/log/messages for any additional error logs.

cluster.name: dxproelsme
node.name: dxproelsme01
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.22.2.81
discovery.seed_hosts:
  - 172.22.2.81
  - 172.22.2.82
  - 172.22.2.83
cluster.initial_master_nodes:
  - dxproelsme01
  - dxproelsme02
  - dxproelsme03
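
On the other two hosts only node.name and network.host change (dxproelsme02 / 172.22.2.82, dxproelsme03 / 172.22.2.83). After updating the config, restart Elasticsearch on each host and check whether the nodes can see each other, for example:

# rpm install, so systemd manages the service
systemctl restart elasticsearch

# once all three are up, each node should list all three members here
curl http://172.22.2.81:9200/_cat/nodes?v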

I tried your configuration.

Same behaviour.

Logs:

[2019-05-31T11:58:35,477][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [dxproelsme01] [controller/16133] [Main.cc@109] controller (64 bit): Version 7.1.0 (Build a8ee6de8087169) Copyright (c) 2019 Elasticsearch BV
[2019-05-31T11:58:37,461][DEBUG][o.e.a.ActionModule ] [dxproelsme01] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-05-31T11:58:42,224][INFO ][o.e.d.DiscoveryModule ] [dxproelsme01] using discovery type [zen] and seed hosts providers [settings]
[2019-05-31T11:58:46,325][INFO ][o.e.n.Node ] [dxproelsme01] initialized
[2019-05-31T11:58:46,326][INFO ][o.e.n.Node ] [dxproelsme01] starting ...
[2019-05-31T11:58:46,551][INFO ][o.e.t.TransportService ] [dxproelsme01] publish_address {172.22.2.81:9300}, bound_addresses {172.22.2.81:9300}
[2019-05-31T11:58:46,567][INFO ][o.e.b.BootstrapChecks ] [dxproelsme01] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-05-31T11:58:46,631][INFO ][o.e.c.c.Coordinator ] [dxproelsme01] cluster UUID [Ki2hYWPYTH-XL3Fm6sIKxg]
[2019-05-31T11:58:46,816][INFO ][o.e.c.s.MasterService ] [dxproelsme01] elected-as-master ([1] nodes joined)[{dxproelsme01}{FFrdlUYsT6S9RKDwyLzs9g}{th5QPlbWSO2PLFyOrsUUjQ}{172.22.2.81}{172.22.2.81:9300}{ml.machine_memory=16658423808, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 16, version: 50, reason: master node changed {previous [], current [{dxproelsme01}{FFrdlUYsT6S9RKDwyLzs9g}{th5QPlbWSO2PLFyOrsUUjQ}{172.22.2.81}{172.22.2.81:9300}{ml.machine_memory=16658423808, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-05-31T11:58:46,989][INFO ][o.e.c.s.ClusterApplierService] [dxproelsme01] master node changed {previous [], current [{dxproelsme01}{FFrdlUYsT6S9RKDwyLzs9g}{th5QPlbWSO2PLFyOrsUUjQ}{172.22.2.81}{172.22.2.81:9300}{ml.machine_memory=16658423808, xpack.installed=true, ml.max_open_jobs=20}]}, term: 16, version: 50, reason: Publication{term=16, version=50}
[2019-05-31T11:58:47,053][INFO ][o.e.h.AbstractHttpServerTransport] [dxproelsme01] publish_address {172.22.2.81:9200}, bound_addresses {172.22.2.81:9200}
[2019-05-31T11:58:47,054][INFO ][o.e.n.Node ] [dxproelsme01] started
[2019-05-31T11:58:47,281][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [dxproelsme01] Failed to clear cache for realms [[]]
[2019-05-31T11:58:47,353][INFO ][o.e.l.LicenseService ] [dxproelsme01] license [7dd8f6c0-60ef-4ea5-bae1-c4fcf3e6aa9a] mode [basic] - valid
[2019-05-31T11:58:47,369][INFO ][o.e.g.GatewayService ] [dxproelsme01] recovered [0] indices into cluster_state

Are you using Linux? Is SELinux disabled?
Check here: /etc/sysconfig/selinux
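
Or just run one of these to confirm:

getenforce    # prints Enforcing, Permissive or Disabled
sestatus      # more detailed SELinux status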

Yes, RedHat.

SELinux is disabled.

Can you share the full output of GET / from all three nodes? I.e. something like these three commands:

curl http://172.22.2.81:9200/
curl http://172.22.2.82:9200/
curl http://172.22.2.83:9200/

root@dxproelsme03:~# curl http://172.22.2.81:9200
{
  "name" : "dxproelsme01",
  "cluster_name" : "dxproelsme",
  "cluster_uuid" : "Ki2hYWPYTH-XL3Fm6sIKxg",
  "version" : {
    "number" : "7.1.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "606a173",
    "build_date" : "2019-05-16T00:43:15.323135Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
root@dxproelsme03:~# curl http://172.22.2.82:9200
{
  "name" : "dxproelsme02",
  "cluster_name" : "dxproelsme",
  "cluster_uuid" : "6CCZsxyYSJylB9P_kujFlA",
  "version" : {
    "number" : "7.1.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "606a173",
    "build_date" : "2019-05-16T00:43:15.323135Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
root@dxproelsme03:~# curl http://172.22.2.83:9200
{
  "name" : "dxproelsme03",
  "cluster_name" : "dxproelsme",
  "cluster_uuid" : "sRZoKEEDQZywbRU3qb8L0Q",
  "version" : {
    "number" : "7.1.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "606a173",
    "build_date" : "2019-05-16T00:43:15.323135Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Thanks. The three nodes all have distinct cluster UUIDs:

  "cluster_uuid" : "Ki2hYWPYTH-XL3Fm6sIKxg",
  "cluster_uuid" : "6CCZsxyYSJylB9P_kujFlA",
  "cluster_uuid" : "sRZoKEEDQZywbRU3qb8L0Q",

This is probably because the first time they started up you were not using the config you quote above, and so they each formed a one-node cluster. You cannot merge separate clusters together. Instead you will need to start again from the beginning: wipe their data directories and restart them, and they should form into a single cluster this time.
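
Something like this on each node should do it, assuming the rpm layout and the path.data you showed (note it permanently deletes everything in the data directory, which is only OK here because these clusters hold no indices yet):

# stop the service on all three nodes first
systemctl stop elasticsearch

# wipe this node's data directory (path.data: /var/lib/elasticsearch)
rm -rf /var/lib/elasticsearch/*

# start the nodes again once all three have been wiped
systemctl start elasticsearch

Once they are back up, all three GET / responses should show the same cluster_uuid, and _cluster/health should report "number_of_nodes" : 3.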


Someone started all the nodes before the configuration was in place.

I deleted the data directories and now it's OK.

Thanks.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.