Replacing hosts in a cluster

Hi, we have a 7.9.1 cluster consisting of 3 hosts and we need to replace it with 3 new servers running ES 7.14-7.15. Would it be enough to start one new server, wait for it to join the cluster, then shut down one old server, wait for all indices to become green, and then repeat the process for the next new and old server, and so on? And finally let the client apps know about the new list of servers? Will this simple approach work while maintaining resilience? Efficiency and the extra I/O this causes aren't that important for this one-time operation. Thanks.
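The wait-between-swaps step above can be sketched as a small shell loop against the cluster health API. This is a hedged sketch, not a complete procedure; the host/port and the use of a plain HTTP endpoint are assumptions (adjust for your security setup):

```shell
# Assumed endpoint of any node in the cluster.
ES=http://localhost:9200

# Block until cluster health reports green before taking down the next
# old node. wait_for_status makes the call itself wait up to 60s; the
# loop retries until that actually happens.
until curl -fsS "$ES/_cluster/health?wait_for_status=green&timeout=60s" \
    | grep -q '"status":"green"'; do
  echo "still waiting for green..."
  sleep 10
done
```

Running this between each "shut down one old server / start one new server" pair ensures every shard has been fully replicated again before you remove the next copy.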

Yes. Look at Rolling upgrades | Elasticsearch Guide [7.15] | Elastic

Thanks. I know about that guide and I've actually done so before. This time, though, I don't need to upgrade the servers in place, but rather to replace each node with a completely new instance running on brand-new hardware.

You can apply the same process and move the data dir from one machine to the other IMO.
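The "move the data dir" option above could look roughly like this. A hedged sketch only; the service name, data path, and hostname are assumptions, and the node must be fully stopped before copying:

```shell
# On the old host: stop the node cleanly first.
sudo systemctl stop elasticsearch

# Copy the data path to the replacement machine (path is the Debian/RPM
# default; yours may differ, see path.data in elasticsearch.yml).
rsync -a /var/lib/elasticsearch/ new-host:/var/lib/elasticsearch/

# Then start Elasticsearch on new-host with equivalent node settings,
# so it rejoins the cluster holding the same shard data.
```

This avoids re-replicating every shard over the network via recovery, at the cost of a manual copy step per node.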

But yeah you can let Elasticsearch do the job for you.

I'm a bit baffled thinking about how best to do this. What if I let one new, empty node join the cluster of 3, take down one old node, and then the new server suddenly dies? Will that result in data loss, because a node running the newer (minor) version of ES can't relocate the shards it holds back to nodes running the older version? Too bad.

How about this instead: make the client app aware of all six master-eligible nodes, even before adding the 3 new ones. Then let all 3 new servers running the newer version of ES fully join the cluster one after another, for a total of 6 nodes. Then take down the old servers one by one, waiting for the cluster to converge after each. Finally, remove the 3 decommissioned nodes from the client app config. If one of the 6 servers, old or new, dies during the process, some other node should still be able to provide the missing data. Am I completely wrong in my reasoning?
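One way to make the "take down the old servers one by one" step safer is to let Elasticsearch drain each old node before stopping it, using cluster-level allocation filtering. A hedged sketch; the node names and endpoint are assumptions:

```shell
ES=http://localhost:9200

# Tell the cluster to move all shards off the old nodes. Elasticsearch
# will relocate them to the remaining (new) nodes in the background.
curl -fsS -X PUT "$ES/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "cluster.routing.allocation.exclude._name": "old-node-1,old-node-2,old-node-3"
    }
  }'

# Watch shard counts per node; once the excluded nodes hold no shards,
# they can be shut down without losing any shard copies.
curl -fsS "$ES/_cat/allocation?v"

# After decommissioning, clear the exclusion so it doesn't linger.
curl -fsS -X PUT "$ES/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{ "persistent": { "cluster.routing.allocation.exclude._name": null } }'
```

With this approach a node is only stopped after it is empty, so a surprise failure of one of the six servers during the migration leaves every shard with at least one live copy elsewhere (assuming your indices have replicas).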

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.