Setting up an Elasticsearch cluster results in two clusters

Hi there, I'm setting up an Elasticsearch cluster using two separate Linux machines.

On the first Linux machine, elasticsearch.yml:

# ---------------------------------- Cluster -----------------------------------
cluster.name: my-elk
# ------------------------------------ Node ------------------------------------
node.name: lib9-CentOS-01
# ---------------------------------- Network -----------------------------------
network.host: 10.60.37.56
http.port: 9200
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["10.60.37.56", "10.60.37.57"]
cluster.initial_master_nodes: ["lib9-CentOS-02"]

On the second Linux machine, elasticsearch.yml:

# ---------------------------------- Cluster -----------------------------------
cluster.name: my-elk
# ------------------------------------ Node ------------------------------------
node.name: lib9-CentOS-02
# ---------------------------------- Network -----------------------------------
network.host: 10.60.37.57
http.port: 9200
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["10.60.37.56", "10.60.37.57"]
cluster.initial_master_nodes: ["lib9-CentOS-02"]

After starting the Elasticsearch service on both nodes, the cluster status is shown below.
It looks like each node formed a separate cluster, as the cluster_uuid values are different.

I also tried the following in elasticsearch.yml on both machines, but the result was the same.

cluster.initial_master_nodes: ["lib9-CentOS-01", "lib9-CentOS-02"]

What's wrong with my settings?
Did I miss a step needed to form these two nodes into one cluster?

  • The first node:
[root@lib9-CentOS-01 elasticsearch]# curl -XGET 'http://10.60.37.56:9200/_cluster/state?pretty'|more
{
  "cluster_name" : "my-elk",
  "cluster_uuid" : "ZECF4nRVRfKcLkiOMKVgCQ",
  "version" : 61,
  "state_uuid" : "52o-KcqWT6ihRDQ7XylBAQ",
  "master_node" : "X26_cY-jQqO31MSw7sYV9Q",
  "blocks" : { },
  "nodes" : {
    "X26_cY-jQqO31MSw7sYV9Q" : {
      "name" : "lib9-CentOS-01",
      "ephemeral_id" : "8_ehu8qoRXSyarvNpfB3Og",
      "transport_address" : "10.60.37.56:9300",
      "attributes" : {
        "ml.machine_memory" : "8201666560",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "512",
        "ml.max_jvm_size" : "1073741824"
      },
  • The second node:
[root@lib9-CentOS-02 elasticsearch]# curl -XGET 'http://10.60.37.57:9200/_cluster/state?pretty'|more
{
  "cluster_name" : "my-elk",
  "cluster_uuid" : "7jjgqZRSSveHbUzfgqhRPw",
  "version" : 61,
  "state_uuid" : "tQVTz9CLRlKPRn5CANQ_uA",
  "master_node" : "dMOj9zi4SMqx88olIJ8XBw",
  "blocks" : { },
  "nodes" : {
    "dMOj9zi4SMqx88olIJ8XBw" : {
      "name" : "lib9-CentOS-02",
      "ephemeral_id" : "3EWhBmfiQEOvAIpurpXMaw",
      "transport_address" : "10.60.37.57:9300",
      "attributes" : {
        "ml.machine_memory" : "8201666560",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "512",
        "ml.max_jvm_size" : "1073741824"
      },
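Paging through the full cluster state isn't necessary to see the split: comparing just the `cluster_uuid` field from each node is enough. A minimal sketch (the `same_cluster` helper is my own, not an Elasticsearch tool; here it runs against the two UUIDs captured above rather than live curl output):

```shell
# same_cluster: compare the cluster_uuid fields from two JSON dumps.
# Nodes that joined one cluster always report identical UUIDs.
same_cluster() {
  u1=$(printf '%s\n' "$1" | grep -o '"cluster_uuid" : "[^"]*"')
  u2=$(printf '%s\n' "$2" | grep -o '"cluster_uuid" : "[^"]*"')
  [ -n "$u1" ] && [ "$u1" = "$u2" ]
}

# The two UUIDs captured above:
state1='"cluster_uuid" : "ZECF4nRVRfKcLkiOMKVgCQ",'
state2='"cluster_uuid" : "7jjgqZRSSveHbUzfgqhRPw",'

if same_cluster "$state1" "$state2"; then
  echo "nodes joined one cluster"
else
  echo "split: two separate clusters"   # this is what the output above shows
fi
```

Against a live setup, the same comparison can be made by feeding it the output of `curl -s 'http://<node>:9200/'` from each node.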

I have also added name-to-IP-address mappings for both nodes to /etc/hosts on both machines.

Just to mention, both nodes can communicate with each other on ports 9200 and 9300, so there is no firewall issue.

OK, I fixed this issue myself after some research and testing.
The problem was that I started the service before configuring the cluster.
If you need to configure a cluster, you shouldn't start the service before configuring it: on first start, each node bootstraps its own cluster and persists that cluster state to disk, and later config changes can't merge the two.
In my case, I had to delete the contents of the Elasticsearch data directory.
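For anyone who hits the same thing, the reset looked roughly like this (a sketch; `/var/lib/elasticsearch` is the default data path for package installs and may differ on your system, and wiping it destroys any data already on the node, so this is only safe on a fresh cluster):

```shell
# Run on BOTH nodes. Only safe on a fresh cluster with no data you need.
systemctl stop elasticsearch

# Remove the stale cluster state each node bootstrapped on its own
# (check path.data in elasticsearch.yml if you changed the default).
rm -rf /var/lib/elasticsearch/*

# With the corrected elasticsearch.yml in place, the nodes now discover
# each other on startup and bootstrap a single cluster.
systemctl start elasticsearch
```

Afterwards, `curl 'http://10.60.37.56:9200/_cluster/health?pretty'` should report `"number_of_nodes" : 2`, and both nodes should show the same cluster_uuid.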
