You will need to post your config files and also some logs to help us understand what your issue is.
Each Elasticsearch node has a unique node id that cannot be changed and is not exposed by any configuration setting. This node id is created the first time Elasticsearch runs and joins a cluster, and it is persisted in the data path specified by the `path.data` setting in `elasticsearch.yml`.
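For reference, a minimal sketch of the relevant setting (the values below are just examples, not your actual config):

```yaml
# elasticsearch.yml -- minimal sketch, example values only
cluster.name: my-cluster          # example name, use your own
node.name: es-node-1              # example name, use your own
# The node id (and all index data) is persisted under this directory.
# If a new container starts with an empty directory here, it is a brand new node.
path.data: /usr/share/elasticsearch/data
```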
Since you are using containers, this `path.data` needs to be an external volume mounted into your container; it cannot be ephemeral storage inside the container. But I'm assuming that you are already using external volumes.
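As a rough illustration (image version, names, and paths here are assumptions, adjust to your setup), a Swarm stack file would mount a named volume at the data path so the data survives container restarts:

```yaml
# docker-stack.yml -- sketch assuming the official image and its default data path
version: "3.8"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0  # example version
    environment:
      - node.name=es-node-1   # example node name
    volumes:
      # Named volume keeps path.data outside the container's ephemeral layer
      - esdata:/usr/share/elasticsearch/data
volumes:
  esdata:
```

Keep in mind that a plain named volume is local to the host the task runs on, so a replacement container only finds the old data if it is scheduled on the same host or if the volume driver provides shared storage.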
I do not use Docker much, but from your description, when your Elasticsearch node container goes down, Docker Swarm will automatically spin up another container. Since this is done in an automated way, it seems to be a completely new container using a different `path.data`, which in the end is a completely new Elasticsearch node.
What will happen after this container is started depends on how your cluster is configured.
If you have at least 3 master-eligible nodes and your indices have replicas, this new container should be able to join the cluster, and the data that was on the old container will be rebuilt on the new one from the replicas.
If you have a 2-node cluster or a single-node cluster, then when your container goes down your entire cluster will go down and there is no cluster for this new container to join. You need to recover the cluster before anything else, which means bringing back the old container or reusing its data path in a new one.
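For completeness, a sketch of what the resilient case looks like (node names are examples): three master-eligible nodes that can discover each other, plus indices keeping at least one replica.

```yaml
# elasticsearch.yml on each of the three nodes -- sketch with example names
cluster.name: my-cluster
node.name: es-node-1                       # es-node-2 / es-node-3 on the other nodes
node.roles: [ master, data ]
discovery.seed_hosts: [ "es-node-1", "es-node-2", "es-node-3" ]
cluster.initial_master_nodes: [ "es-node-1", "es-node-2", "es-node-3" ]  # only for the very first cluster bootstrap
```

With this kind of setup, `number_of_replicas` (which defaults to 1) means each primary shard has a copy on another node, which is what allows the data of a lost node to be rebuilt.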
So, you need to share how your cluster is configured, but from your description, Elasticsearch is working as expected and the issue is related to the way you are deploying it.