Node Discovery Elasticsearch 7.0.0

I'm having a problem joining nodes into a cluster. The two I'm using to troubleshoot are 10.21.5.14 and 10.21.5.15. They appear to share the same cluster_uuid, but they don't recognize each other as nodes. Netstat shows both ports 9200 and 9300 listening on each host. Any ideas? Config files are at the end.

curl -X GET "10.21.5.14:9200/_cluster/health?pretty"

    {
      "cluster_name" : "Production",
      "status" : "yellow",
      "timed_out" : false,
      "number_of_nodes" : 1,
      "number_of_data_nodes" : 1,
      "active_primary_shards" : 33,
      "active_shards" : 33,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 26,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 55.932203389830505
    }

curl -X GET "10.21.5.14:9200/"

    {
      "name" : "Log1",
      "cluster_name" : "Production",
      "cluster_uuid" : "Lbq1LvwERjK_ki9GMsdIuA",
      "version" : {
        "number" : "7.0.0",
        "build_flavor" : "default",
        "build_type" : "rpm",
        "build_hash" : "b7e28a7",
        "build_date" : "2019-04-05T22:55:32.697037Z",
        "build_snapshot" : false,
        "lucene_version" : "8.0.0",
        "minimum_wire_compatibility_version" : "6.7.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
      },
      "tagline" : "You Know, for Search"
    }

curl -X GET "10.21.5.15:9200/_cluster/health?pretty"

        {
          "cluster_name" : "Production",
          "status" : "yellow",
          "timed_out" : false,
          "number_of_nodes" : 1,
          "number_of_data_nodes" : 1,
          "active_primary_shards" : 30,
          "active_shards" : 30,
          "relocating_shards" : 0,
          "initializing_shards" : 0,
          "unassigned_shards" : 25,
          "delayed_unassigned_shards" : 0,
          "number_of_pending_tasks" : 0,
          "number_of_in_flight_fetch" : 0,
          "task_max_waiting_in_queue_millis" : 0,
          "active_shards_percent_as_number" : 54.54545454545454
        }

curl -X GET "10.21.5.15:9200/"

{
  "name" : "Log2",
  "cluster_name" : "Production",
  "cluster_uuid" : "Lbq1LvwERjK_ki9GMsdIuA",
  "version" : {
    "number" : "7.0.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "b7e28a7",
    "build_date" : "2019-04-05T22:55:32.697037Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.7.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Node1 config.yml

cluster.name: Production
node.name: Log1
node.attr.rack: r1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 10.21.5.14
http.port: 9200

discovery.zen.ping.unicast.hosts: [10.21.5.14, 10.21.5.15]
discovery.zen.minimum_master_nodes: 2
cluster.initial_master_nodes:
  - 10.21.5.14
  - 10.21.5.15

Node2 config.yml

cluster.name: Production
node.name: Log2
node.attr.rack: r1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 10.21.5.15
http.port: 9200

discovery.zen.ping.unicast.hosts: [10.21.5.14, 10.21.5.15]
discovery.zen.minimum_master_nodes: 2
cluster.initial_master_nodes:
  - 10.21.5.14
  - 10.21.5.15
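(For reference: on 7.x the `discovery.zen.*` settings above are deprecated, and `cluster.initial_master_nodes` entries are expected to match each node's `node.name` rather than its IP. A hedged sketch of how Node1's discovery section might look instead; a sketch only, values should be adjusted to your environment:)

```
# Hypothetical 7.x-style settings for Node1 (not the poster's actual fix):
cluster.name: Production
node.name: Log1
network.host: 10.21.5.14
# Replaces discovery.zen.ping.unicast.hosts in 7.x:
discovery.seed_hosts: ["10.21.5.14", "10.21.5.15"]
# Entries must match the node.name values, not IPs:
cluster.initial_master_nodes: ["Log1", "Log2"]
```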

Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

This is the icon to use if you are not using markdown format:

[image: the </> formatting button]

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
Please update your post.

Could you share the logs of both nodes?
Also check that you can actually connect from one node to the other using 9300 port.

Sorry about that. I will be sure to format code going forward.

I can successfully telnet from node1 to node2 on port 9300, and vice versa.

To isolate the problem within the logs, I stopped the service on both nodes, wiped the Production.log files in /var/log/elasticsearch on both nodes, restarted Elasticsearch on node 1, waited for node 1 to stabilize, and then started node 2, hoping to catch the discovery attempt.

Logs from Node 2 are below:

[2016-05-09T10:06:30,073][INFO ][o.e.e.NodeEnvironment    ] [Log2] using [1] data paths, mounts [[/var/lib/elasticsearch (/dev/mapper/elasticsearch-esdata)]], net usable_space [249.9gb], net total_space [249.9gb], types [xfs]
[2016-05-09T10:06:30,078][INFO ][o.e.e.NodeEnvironment    ] [Log2] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-05-09T10:06:30,150][INFO ][o.e.n.Node               ] [Log2] node name [Log2], node ID [o_9gbcK8T_OhzcexvXzrmA]
[2016-05-09T10:06:30,151][INFO ][o.e.n.Node               ] [Log2] version[7.0.0], pid[55955], build[default/rpm/b7e28a7/2019-04-05T22:55:32.697037Z], OS[Linux/3.10.0-957.1.3.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/12/12+33]
[2016-05-09T10:06:30,152][INFO ][o.e.n.Node               ] [Log2] JVM home [/usr/share/elasticsearch/jdk]
[2016-05-09T10:06:30,152][INFO ][o.e.n.Node               ] [Log2] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-13385793591961575515, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -Dio.netty.allocator.type=unpooled, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm, -Des.bundled_jdk=true]
[2016-05-09T10:06:32,228][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [aggs-matrix-stats]
[2016-05-09T10:06:32,228][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [analysis-common]
[2016-05-09T10:06:32,229][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [ingest-common]
[2016-05-09T10:06:32,229][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [ingest-geoip]
[2016-05-09T10:06:32,229][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [ingest-user-agent]
[2016-05-09T10:06:32,230][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [lang-expression]
[2016-05-09T10:06:32,230][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [lang-mustache]
[2016-05-09T10:06:32,230][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [lang-painless]
[2016-05-09T10:06:32,230][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [mapper-extras]
[2016-05-09T10:06:32,231][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [parent-join]
[2016-05-09T10:06:32,231][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [percolator]
[2016-05-09T10:06:32,231][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [rank-eval]
[2016-05-09T10:06:32,232][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [reindex]
[2016-05-09T10:06:32,232][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [repository-url]
[2016-05-09T10:06:32,232][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [transport-netty4]
[2016-05-09T10:06:32,233][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-ccr]
[2016-05-09T10:06:32,233][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-core]
[2016-05-09T10:06:32,233][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-deprecation]
[2016-05-09T10:06:32,233][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-graph]
[2016-05-09T10:06:32,234][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-ilm]
[2016-05-09T10:06:32,234][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-logstash]
[2016-05-09T10:06:32,234][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-ml]
[2016-05-09T10:06:32,235][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-monitoring]
[2016-05-09T10:06:32,235][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-rollup]
[2016-05-09T10:06:32,235][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-security]
[2016-05-09T10:06:32,235][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-sql]
[2016-05-09T10:06:32,236][INFO ][o.e.p.PluginsService     ] [Log2] loaded module [x-pack-watcher]
[2016-05-09T10:06:32,236][INFO ][o.e.p.PluginsService     ] [Log2] no plugins loaded
[2016-05-09T10:06:38,213][INFO ][o.e.x.s.a.s.FileRolesStore] [Log2] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2016-05-09T10:06:38,956][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [Log2] [controller/56035] [Main.cc@109] controller (64 bit): Version 7.0.0 (Build cdaa022645f38d) Copyright (c) 2019 Elasticsearch BV
[2016-05-09T10:06:39,438][DEBUG][o.e.a.ActionModule       ] [Log2] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2016-05-09T10:06:40,036][INFO ][o.e.d.DiscoveryModule    ] [Log2] using discovery type [zen] and seed hosts providers [settings]
[2016-05-09T10:06:41,011][INFO ][o.e.n.Node               ] [Log2] initialized
[2016-05-09T10:06:41,012][INFO ][o.e.n.Node               ] [Log2] starting ...
[2016-05-09T10:06:41,200][INFO ][o.e.t.TransportService   ] [Log2] publish_address {10.21.5.15:9300}, bound_addresses {10.21.5.15:9300}
[2016-05-09T10:06:41,210][INFO ][o.e.b.BootstrapChecks    ] [Log2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2016-05-09T10:06:41,471][INFO ][o.e.c.s.MasterService    ] [Log2] elected-as-master ([1] nodes joined)[{Log2}{o_9gbcK8T_OhzcexvXzrmA}{3SYuty2tRHCmv1TMEOnJtg}{10.21.5.15}{10.21.5.15:9300}{ml.machine_memory=4122521600, rack=r1, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 18, version: 349, reason: master node changed {previous [], current [{Log2}{o_9gbcK8T_OhzcexvXzrmA}{3SYuty2tRHCmv1TMEOnJtg}{10.21.5.15}{10.21.5.15:9300}{ml.machine_memory=4122521600, rack=r1, xpack.installed=true, ml.max_open_jobs=20}]}

Note: the system date/time is intentional. I am creating a forensics training environment.

Can you share the same logs from the other node as well?

Node1 logs are below as requested.

[2016-05-09T10:02:43,362][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [ingest-geoip]
[2016-05-09T10:02:43,362][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [ingest-user-agent]
[2016-05-09T10:02:43,362][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [lang-expression]
[2016-05-09T10:02:43,363][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [lang-mustache]
[2016-05-09T10:02:43,363][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [lang-painless]
[2016-05-09T10:02:43,363][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [mapper-extras]
[2016-05-09T10:02:43,364][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [parent-join]
[2016-05-09T10:02:43,364][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [percolator]
[2016-05-09T10:02:43,364][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [rank-eval]
[2016-05-09T10:02:43,364][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [reindex]
[2016-05-09T10:02:43,365][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [repository-url]
[2016-05-09T10:02:43,365][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [transport-netty4]
[2016-05-09T10:02:43,365][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-ccr]
[2016-05-09T10:02:43,366][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-core]
[2016-05-09T10:02:43,366][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-deprecation]
[2016-05-09T10:02:43,366][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-graph]
[2016-05-09T10:02:43,366][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-ilm]
[2016-05-09T10:02:43,367][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-logstash]
[2016-05-09T10:02:43,367][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-ml]
[2016-05-09T10:02:43,367][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-monitoring]
[2016-05-09T10:02:43,368][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-rollup]
[2016-05-09T10:02:43,368][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-security]
[2016-05-09T10:02:43,368][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-sql]
[2016-05-09T10:02:43,369][INFO ][o.e.p.PluginsService     ] [Log1] loaded module [x-pack-watcher]
[2016-05-09T10:02:43,369][INFO ][o.e.p.PluginsService     ] [Log1] no plugins loaded
[2016-05-09T10:02:49,213][INFO ][o.e.x.s.a.s.FileRolesStore] [Log1] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2016-05-09T10:02:49,937][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [Log1] [controller/58646] [Main.cc@109] controller (64 bit): Version 7.0.0 (Build cdaa022645f38d) Copyright (c) 2019 Elasticsearch BV
[2016-05-09T10:02:50,435][DEBUG][o.e.a.ActionModule       ] [Log1] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2016-05-09T10:02:51,006][INFO ][o.e.d.DiscoveryModule    ] [Log1] using discovery type [zen] and seed hosts providers [settings]
[2016-05-09T10:02:51,976][INFO ][o.e.n.Node               ] [Log1] initialized
[2016-05-09T10:02:51,976][INFO ][o.e.n.Node               ] [Log1] starting ...
[2016-05-09T10:02:52,138][INFO ][o.e.t.TransportService   ] [Log1] publish_address {10.21.5.14:9300}, bound_addresses {10.21.5.14:9300}
[2016-05-09T10:02:52,148][INFO ][o.e.b.BootstrapChecks    ] [Log1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2016-05-09T10:02:52,327][INFO ][o.e.c.s.MasterService    ] [Log1] elected-as-master ([1] nodes joined)[{Log1}{o_9gbcK8T_OhzcexvXzrmA}{GGIx-zD2TduZACcl1adoHA}{10.21.5.14}{10.21.5.14:9300}{ml.machine_memory=4122521600, rack=r1, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 25, version: 674, reason: master node changed {previous [], current [{Log1}{o_9gbcK8T_OhzcexvXzrmA}{GGIx-zD2TduZACcl1adoHA}{10.21.5.14}{10.21.5.14:9300}{ml.machine_memory=4122521600, rack=r1, xpack.installed=true, ml.max_open_jobs=20}]}
[2016-05-09T10:02:52,527][INFO ][o.e.c.s.ClusterApplierService] [Log1] master node changed {previous [], current [{Log1}{o_9gbcK8T_OhzcexvXzrmA}{GGIx-zD2TduZACcl1adoHA}{10.21.5.14}{10.21.5.14:9300}{ml.machine_memory=4122521600, rack=r1, xpack.installed=true, ml.max_open_jobs=20}]}, term: 25, version: 674, reason: Publication{term=25, version=674}
[2016-05-09T10:02:52,619][INFO ][o.e.h.AbstractHttpServerTransport] [Log1] publish_address {10.21.5.14:9200}, bound_addresses {10.21.5.14:9200}
[2016-05-09T10:02:52,620][INFO ][o.e.n.Node               ] [Log1] started
[2016-05-09T10:02:52,898][INFO ][o.e.c.s.ClusterSettings  ] [Log1] updating [xpack.monitoring.collection.enabled] from [false] to [true]
[2016-05-09T10:02:53,215][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [Log1] Failed to clear cache for realms [[]]
[2016-05-09T10:02:53,284][INFO ][o.e.l.LicenseService     ] [Log1] license [d1eb0712-0116-46db-b6cf-e218b3076e53] mode [basic] - valid
[2016-05-09T10:02:53,298][INFO ][o.e.g.GatewayService     ] [Log1] recovered [15] indices into cluster_state
[2016-05-09T10:02:53,458][WARN ][r.suppressed             ] [Log1] path: /.kibana/_doc/space%3Adefault, params: {index=.kibana, id=space:default}

I upgraded from 6.7.1 to 7.0.1 and had the same problem. It turns out that the first time the cluster forms, you need this in your elasticsearch.yml file:

cluster.initial_master_nodes: <node name>  (the name here must match the node.name set in the same file)

That doesn't look like the whole log from Log1 (it should start with two messages from o.e.e.NodeEnvironment), and the missing lines were what I was actually after 🙂

However I think @elasticforme is right, you started these nodes without cluster.initial_master_nodes the first time, so they formed two separate clusters, and you can't merge clusters together. Since this is just an experiment, I'd start again by wiping the data paths for both nodes.

You are right, I must have missed the first few when copying. Below are the missing lines from Log1.

[2016-05-09T10:02:41,156][INFO ][o.e.e.NodeEnvironment    ] [Log1] using [1] data paths, mounts [[/var/lib/elasticsearch (/dev/mapper/elasticsearch-esdata)]], net usable_space [249.4gb], net total_space [249.9gb], types [xfs]
[2016-05-09T10:02:41,161][INFO ][o.e.e.NodeEnvironment    ] [Log1] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-05-09T10:02:41,252][INFO ][o.e.n.Node               ] [Log1] node name [Log1], node ID [o_9gbcK8T_OhzcexvXzrmA]
[2016-05-09T10:02:41,253][INFO ][o.e.n.Node               ] [Log1] version[7.0.0], pid[58566], build[default/rpm/b7e28a7/2019-04-05T22:55:32.697037Z], OS[Linux/3.10.0-957.1.3.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/12/12+33]
[2016-05-09T10:02:41,253][INFO ][o.e.n.Node               ] [Log1] JVM home [/usr/share/elasticsearch/jdk]
[2016-05-09T10:02:41,253][INFO ][o.e.n.Node               ] [Log1] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-6689379447873679552, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -Dio.netty.allocator.type=unpooled, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm, -Des.bundled_jdk=true]

David, I don't think you have to do anything. What I did:

on node1: cluster.initial_master_nodes: node1
on node2: cluster.initial_master_nodes: node1
on node3: cluster.initial_master_nodes: node1

I started Elasticsearch on node1, waited a minute, then started it on node2 and node3, and everything came up OK.

Then I removed that line from all systems, restarted the whole cluster, and it all worked.

Ok, there's something stranger going on here:

[2016-05-09T10:02:41,252][INFO ][o.e.n.Node               ] [Log1] node name [Log1], node ID [o_9gbcK8T_OhzcexvXzrmA]
[2016-05-09T10:06:30,150][INFO ][o.e.n.Node               ] [Log2] node name [Log2], node ID [o_9gbcK8T_OhzcexvXzrmA]

These two nodes have the same node ID, which means you probably cloned the data directory. That probably won't do what you want, and definitely prevents these nodes from joining each other. It's probably safest to wipe everything and start again.
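The duplicate ID can be spotted mechanically. A small sketch (the sample log lines are inlined from the excerpts above, and the sed pattern is an assumption about that log format) that extracts and compares the node ID fields:

```shell
# Extract the node ID from each node's startup log line and compare them.
log1='[2016-05-09T10:02:41,252][INFO ][o.e.n.Node] [Log1] node name [Log1], node ID [o_9gbcK8T_OhzcexvXzrmA]'
log2='[2016-05-09T10:06:30,150][INFO ][o.e.n.Node] [Log2] node name [Log2], node ID [o_9gbcK8T_OhzcexvXzrmA]'

# Capture whatever sits inside the brackets after "node ID".
id1=$(echo "$log1" | sed -n 's/.*node ID \[\([^]]*\)\].*/\1/p')
id2=$(echo "$log2" | sed -n 's/.*node ID \[\([^]]*\)\].*/\1/p')

if [ "$id1" = "$id2" ]; then
  echo "DUPLICATE node ID: $id1"
else
  echo "node IDs differ"
fi
# → DUPLICATE node ID: o_9gbcK8T_OhzcexvXzrmA
```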

@elasticforme what you say is true assuming you've never started these nodes before, but that's not what's happening here. These nodes have already started up and each formed a cluster, so they will be ignoring cluster.initial_master_nodes.

David, I've tried setting cluster.initial_master_nodes to Log1 on both instances and restarting them sequentially. They remained independent clusters, as you indicated they would. I have also removed Elasticsearch from both machines, reinstalled, and reconfigured, with the same results. How can I be sure to get unique node IDs?

Wipe their data directories. I'm not sure that a reinstall would do this, you might have to do it by hand.
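For anyone following along, a hedged sketch of the by-hand wipe. The paths and service name are assumptions drawn from the configs above (path.data is /var/lib/elasticsearch; 7.x keeps node state under a nodes/ subdirectory), and DRY_RUN defaults to printing the commands rather than running them:

```shell
#!/bin/sh
# Sketch: wipe a node's data directory so it re-bootstraps with a fresh node ID.
# DRY_RUN=1 only prints the commands; set DRY_RUN=0 to actually run them.
DRY_RUN=${DRY_RUN:-1}
DATA_DIR="/var/lib/elasticsearch"   # matches path.data in the configs above

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run systemctl stop elasticsearch
run rm -rf "${DATA_DIR:?}/nodes"    # :? aborts if DATA_DIR is somehow empty
run systemctl start elasticsearch
```

Repeat on each node before restarting the cluster, so neither side carries over the cloned state.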

Had to do it by hand. Worked like a charm. Thanks for the help!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.