Indexes cannot be referenced after cluster configuration

I am creating an Elasticsearch cluster.
I removed the node data from an Elasticsearch instance that had been running as a single node, and configured a cluster with two newly created Elasticsearch nodes.

The cluster itself came up, but the health of the existing indexes is now red and they can no longer be viewed from Kibana.
How can I recover them?

What version are you running?


Elasticsearch and Kibana are the same version.

Can you elaborate more on what you did here?

I stopped the existing Elasticsearch and removed the directories under nodes:

# systemctl stop elasticsearch
# rm -rf /var/lib/elasticsearch/nodes/

Then I started the new Elasticsearch node to be added, and started the existing Elasticsearch again:

# systemctl start elasticsearch

The cluster configuration seems to be working.

# curl -XGET "XXX.XXX.XXX.XXX:9200/_cat/nodes?v"
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
XXX.XXX.XXX.XXX 14 78 1 0.00 0.00 0.00 cdfhilmrstw - ELSTEST-02
XXX.XXX.XXX.XXX 31 97 1 0.00 0.02 0.05 cdfhilmrstw * ELSTEST-01
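As a side note, when index health goes red after a change like this, a few standard cluster APIs can show which indexes are affected and why shards are unassigned. A rough sketch, using the same placeholder address as above:

```shell
# Overall cluster health (replace the placeholder with your node's address)
curl -XGET "XXX.XXX.XXX.XXX:9200/_cluster/health?pretty"

# List only the indexes whose health is red
curl -XGET "XXX.XXX.XXX.XXX:9200/_cat/indices?v&health=red"

# Ask Elasticsearch to explain why a shard is unassigned
curl -XGET "XXX.XXX.XXX.XXX:9200/_cluster/allocation/explain?pretty"
```

These commands are read-only, so they are safe to run against a cluster in any state.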

Ok, if you have done this (deleted everything under /var/lib/elasticsearch/nodes/), then it's not surprising that this happens, as the index data no longer exists.

Oh. That's the bad news.
What is the correct procedure to configure a cluster while retaining the existing index?

Take a snapshot and then restore it.
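In case it helps, a rough sketch of the snapshot-and-restore flow. The repository name my_backup and the location /mnt/backups are assumptions here, and the location must already be listed under path.repo in elasticsearch.yml:

```shell
# Register a shared-filesystem snapshot repository
# (assumes path.repo: ["/mnt/backups"] is already set in elasticsearch.yml)
curl -XPUT "XXX.XXX.XXX.XXX:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/backups"}}'

# Take a snapshot of all indices and wait for it to finish
curl -XPUT "XXX.XXX.XXX.XXX:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"

# After rebuilding the cluster, restore the snapshot
curl -XPOST "XXX.XXX.XXX.XXX:9200/_snapshot/my_backup/snapshot_1/_restore"
```

Snapshots can be taken while the cluster is serving traffic, and repeated snapshots into the same repository are incremental.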

"Completely wipe the node by deleting the contents of its data folder."

Doesn't this describe exactly the operation I performed above?

Does the snapshot need as much disk space as the current indexes?
And can it be taken while Elasticsearch is running, or is that difficult?

Yes, but you also need to take note of this:

Once an Elasticsearch node has joined an existing cluster, or bootstrapped a new cluster, it will not join a different cluster.

Yes, nope!

Yes, I have often struggled with this issue.
For example, I ran into the following problem:

# tail -n 100 /var/log/elasticsearch/elasticsearch.log | grep WARN
... snip ...
[2022-07-13T16:42:12,171][WARN ][o.e.c.c.ClusterFormationFailureHelper] [ELSTEST-02] master not discovered or elected yet, an election requires a node with id [9DzP6zJnQbevgRgYmpGUug], have only discovered non-quorum [{ELSTEST-02}{fcux0tenQhSC-tuDyJRz_w}{EKW7ln-2RNCO39oRXiQ_uA}{ELSTEST-02}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{cdfhilmrstw}]; discovery will continue using [] from hosts providers and [{ELSTEST-02}{fcux0tenQhSC-tuDyJRz_w}{EKW7ln-2RNCO39oRXiQ_uA}{ELSTEST-02}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{cdfhilmrstw}] from last-known cluster state; node term 20, last-accepted version 756 in term 20

To solve this, I decided to delete the node directory, but I did not realize that this operation would break the indexes.

Is there any way to add a new node without breaking the existing node and index?

Hmmm, I'm not sure how Elasticsearch behaves.
Somehow it worked this time.

While the single-node Elasticsearch was still running, I started another Elasticsearch configured for the cluster, and then restarted the existing Elasticsearch.

This time I was able to create a cluster configuration without breaking the indexes that the existing Elasticsearch was collecting.

This is my ideal cluster extension.
Is this a correct operation?
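For reference, the settings involved in growing a single node into a cluster this way look roughly like the following sketch (cluster name and host names are placeholders). One important caveat: cluster.initial_master_nodes is only for bootstrapping a brand-new cluster, and must not be set on a node that joins an existing one.

```yaml
# elasticsearch.yml on the new node being added (names are placeholders)
cluster.name: my-cluster          # must match the existing node's cluster name
node.name: ELSTEST-02

# Point discovery at the existing node so the new node joins its cluster
discovery.seed_hosts: ["ELSTEST-01"]

# Do NOT set cluster.initial_master_nodes here: the cluster already exists,
# and that setting is only for bootstrapping a brand-new cluster.
```

With this in place the new node discovers the running node and joins its cluster, so the existing indexes stay intact.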