Hello,
I use Elasticsearch 8.7.1 in an official Docker container.
I mount the data directory (/usr/share/elasticsearch/data/) to the host so that my index data is persisted.
If I delete the container (for an update, for example) and recreate it with the same configuration parameters, I see the following warning in the logs:
log.level": "WARN", "message":"this node is locked into cluster UUID [zRB-tYxKTzCKz19RMWeTng] but [cluster.initial_master_nodes] is set to [my-elasticsearch-01]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts;
Note that the "cluster.initial_master_nodes" setting is not declared anywhere: neither in the elasticsearch.yml configuration file nor in my docker-compose file.
Here are the parameters I include in my docker-compose file (a sketch of the full service definition follows the list):
- discovery.type=single-node
- node.name=my-elasticsearch-01
- cluster.name=my-elastic-cluster
- network.host=0.0.0.0
- xpack.security.enabled=false
- xpack.security.transport.ssl.enabled=false
- xpack.license.self_generated.type=basic
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
I've noticed that I don't get this warning if I don't mount the data to the host, or if I delete the data before recreating the container.
My impression is that the new container does not accept my old data.
What is going on?
How do I get Elasticsearch to accept my old index data?
Thanks!