This node is locked

Hello,
I updated Elasticsearch to version 8.4.1.
Now I'm trying to start Elasticsearch with discovery.type: single-node in the config file, but it does not start. systemctl status shows it stuck in activating. In elasticsearch.log I can see this message:


[2023-01-12T18:23:23,740][WARN ][o.e.c.c.ClusterBootstrapService] [node1] this node is locked into cluster UUID [nwFy-ImJTfy0iYUpOF32gh] but [cluster.initial_master_nodes] is set to [node1]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts

cluster.initial_master_nodes is commented out in elasticsearch.yml.
How can I fix this warning? How can I unlock the node from the cluster?

Welcome to our community! :smiley:

Was this part of a multi-node cluster originally?

In single-node mode there is no need to configure cluster.initial_master_nodes. You can remove cluster.initial_master_nodes and try again.
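For reference, a minimal single-node elasticsearch.yml might look like the sketch below (the cluster and node names are illustrative, not taken from your setup):

```yaml
# Minimal single-node configuration (illustrative values).
cluster.name: my-cluster
node.name: node1
discovery.type: single-node
# cluster.initial_master_nodes must NOT be set in single-node mode --
# delete the line entirely rather than leaving it active.
```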


Yes, it's a single node.
cluster.initial_master_nodes removed.
It did not help. systemctl status still shows activating and I still see the warning in the log.

This message means that Elasticsearch definitely sees cluster.initial_master_nodes in its settings, and the only way to fix the warning is to remove the setting. Are you sure you're editing the right config file? Or maybe you're passing the setting in on the command line instead?
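A quick way to check is to grep everywhere the setting could come from. The snippet below is a sketch: on a real package install you would grep /etc/elasticsearch/ and also inspect `systemctl cat elasticsearch` and `ps -ef` for command-line overrides; the demo uses a temporary file so the commands are runnable anywhere.

```shell
# Create a demo config file containing only a commented-out occurrence
# (stands in for /etc/elasticsearch/elasticsearch.yml).
mkdir -p /tmp/es-conf-demo
printf '#cluster.initial_master_nodes: ["node1"]\ncluster.name: demo\n' \
  > /tmp/es-conf-demo/elasticsearch.yml

# A plain grep still matches the commented-out line:
grep -c 'initial_master_nodes' /tmp/es-conf-demo/elasticsearch.yml

# Anchoring the pattern to non-comment lines shows only ACTIVE settings;
# here it finds nothing, because the only occurrence is commented out:
grep -rn '^[[:space:]]*cluster\.initial_master_nodes' /tmp/es-conf-demo/ \
  || echo "setting is not active"
```

If the anchored grep over your real config directory still finds an active occurrence, that file is where Elasticsearch is picking the setting up.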

I edited /etc/elasticsearch/elasticsearch.yml.
The line with cluster.initial_master_nodes is commented out in this file. I also tried simply removing the line; that did not help.
What does "this node is locked into cluster UUID" mean? How do I unlock the node from the cluster?

David, could you please explain what "this node is locked into cluster UUID" means, and how to unlock the node from the cluster?

Nodes can only join one cluster - after it's joined a cluster, it cannot join a different one. It is "locked in" to that specific cluster. There is no way to unlock it, apart from just deleting the contents of its data path (which wipes all the data it contains).
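In practice the reset looks roughly like the following sketch. This is destructive and erases every index on the node; /var/lib/elasticsearch is the default data path for a package install (an assumption, so verify path.data in your elasticsearch.yml before running anything):

```shell
# DANGER: this wipes the node's cluster state and all of its data.
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*   # default package-install data path (verify path.data first!)
sudo systemctl start elasticsearch     # node bootstraps a brand-new cluster
```

Only do this if you genuinely do not need the data, or have a snapshot to restore from.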

Thank you for the answer, David.
Is it possible to create a new node on the same server and transfer the data from the broken node to the new one?

The log message you quote does not mean the node is broken.

Is there any way to fix this warning without losing data?
The line with cluster.initial_master_nodes is commented out in the config file; that did not help.

I'm not sure what to suggest beyond my previous messages.
