Elasticsearch cluster UUID: _na_

I have multiple Elasticsearch nodes, all on separate machines. For some reason they can't join the cluster.

My first node:

{
  "name" : "myNode_1",
  "cluster_name" : "SRT411",
  "cluster_uuid" : "7lao_UsPTwuB1W5cy467KA",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

My second node:

{
  "name" : "myNode_2",
  "cluster_name" : "SRT411",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "oss",
    "build_type" : "rpm",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

When checking the logs on the second node I get this:

[2020-02-10T13:10:07,920][INFO ][o.e.c.c.ClusterBootstrapService] [myNode_2] skipping cluster bootstrapping as local node does not match bootstrap requirements: [192.168.30.11]

[2020-02-10T13:10:17,926][WARN ][o.e.c.c.ClusterFormationFailureHelper] [myNode_2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.30.11] to bootstrap a cluster: have discovered [{myNode_2}{Dk5X6WGcQbmvrESfSkMykw}{52w2p5UGRXKT6bK6y1JmNQ}{192.168.30.12}{192.168.30.12:9300}{dim}]; discovery will continue using [192.168.30.11:9300, 192.168.30.13:9300] from hosts providers and [{myNode_2}{Dk5X6WGcQbmvrESfSkMykw}{52w2p5UGRXKT6bK6y1JmNQ}{192.168.30.12}{192.168.30.12:9300}{dim}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
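
As I read it, the INFO line means this node isn't listed in `cluster.initial_master_nodes`, so it won't bootstrap the cluster itself; it's waiting to discover a master-eligible node matching 192.168.30.11. Based on that log, the relevant discovery settings on the second node would look roughly like this (a reconstruction from the log entries, not a copy of the actual file; the RPM default config path is assumed):

grep -E '^(cluster|network|discovery)' /etc/elasticsearch/elasticsearch.yml
# expected output, reconstructed from the log above:
#   cluster.name: SRT411
#   network.host: 192.168.30.12
#   discovery.seed_hosts: ["192.168.30.11", "192.168.30.13"]
#   cluster.initial_master_nodes: ["192.168.30.11"]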

I have opened the ports on both machines, and I can successfully connect with telnet on ports 9200 and 9300.
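
The checks were roughly as follows (run from the second node; IPs as in the log above):

telnet 192.168.30.11 9200   # HTTP port
telnet 192.168.30.11 9300   # transport port used for node-to-node traffic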

It looks like some nodes are using the default distribution while others use the OSS distribution (note the `build_flavor` field in the outputs above). Make sure all nodes in the cluster use the same version and distribution.
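
A quick way to verify this on each machine is to query the root endpoint (the same call that produced the JSON above) and compare the `number` and `build_flavor` fields:

# the root endpoint reports both the version and the distribution flavor
curl -s http://192.168.30.11:9200/ | grep -E '"number"|"build_flavor"'
curl -s http://192.168.30.12:9200/ | grep -E '"number"|"build_flavor"'
# "default" on one node and "oss" on another means mixed distributions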

I've reinstalled Elasticsearch, but I noticed my first node reports version 7.5.2, while a fresh install on the other nodes gives 7.6.0, and I still get the same error. Would the version mismatch cause this problem?

I can't figure out how to install 7.5.2 on the new nodes.
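
In case it helps someone else: with the Elastic yum repository configured, an exact version can be pinned at install time, or the standalone RPM can be fetched directly (URL pattern as used for the 7.x series):

# pin an exact version via the Elastic yum repo
sudo yum install elasticsearch-7.5.2
# or download and install the standalone RPM
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.2-x86_64.rpm
sudo rpm -ivh elasticsearch-7.5.2-x86_64.rpm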

EDIT: I updated the first node to 7.6.0 and the additional nodes still won't join the cluster. I'm getting the exact same error.

EDIT2: Reinstalling Elasticsearch on the VM fixed the issue.

{
  "name" : "myNode_1",
  "cluster_name" : "SRT411",
  "cluster_uuid" : "gx40LIPqQKS7tDW1BAOSAg",
  "version" : {
    "number" : "7.6.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "7f634e9f44834fbc12724506cc1da681b0c3b1e3",
    "build_date" : "2020-02-06T00:09:00.449973Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

{
  "name" : "myNode_2",
  "cluster_name" : "SRT411",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.6.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "7f634e9f44834fbc12724506cc1da681b0c3b1e3",
    "build_date" : "2020-02-06T00:09:00.449973Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
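
For anyone who finds this later: my guess is that the reinstall helped because it reset the node's on-disk state and config. The hands-on equivalent would be something like the sketch below (destructive, and it assumes the RPM default data path):

# wipes this node's local data and cluster state -- do not run on a node with data you need
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*   # RPM default path.data
sudo systemctl start elasticsearch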
