I set up a server, started an ES 6.8 instance, ran it in a cluster for a while, and then found an error in the disk setup that requires me to reinstall everything, starting from the OS.
As far as I can tell, I have only one or two options: back up everything and reinstall, or somehow remove the node from the cluster, wait for all its shards to move to the remaining nodes, wipe the ES data, and then back up/restore much less data.
The first option means long downtime for that node; the second should take more time overall, but with no downtime and less of my own time.
Is it possible to safely remove a node from the cluster? Should I first add a replica to every shard, or is this impossible altogether?
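(For anyone reading later: ES 6.8 does support draining a node via cluster-level shard allocation filtering. A minimal sketch, assuming the cluster listens on localhost:9200 and the node to decommission is named "node-3" — both placeholders, substitute your own values:)

```shell
# Tell the cluster to relocate all shards away from the node being
# decommissioned. "node-3" is a placeholder node name.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.exclude._name": "node-3"
  }
}'

# Watch relocation progress; once no shards remain on node-3,
# it can be shut down without losing data.
curl -X GET "localhost:9200/_cat/shards?v"
```

This only moves shards that have somewhere to go; if the remaining nodes lack the disk space, relocation will stall.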
I do know about replicas, but until now I was only able to run two nodes, both loaded enough that I couldn't afford replicas. The main purpose of the third node was to make replicas possible.
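(If there were capacity for replicas, adding one per shard before touching the node would be a one-call change. A sketch, assuming localhost:9200 and that every index should get one replica — adjust the index pattern to taste:)

```shell
# Set one replica per primary shard on all indices. The cluster will
# start building replica copies on the other nodes immediately.
curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' -d'
{
  "index": { "number_of_replicas": 1 }
}'
```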
I'll try excluding the node right now.
Tried it; nothing meaningful happened, so I ended up stopping the cluster, backing up, and restoring. The current index was (and still is) clogged anyway, and I don't yet understand what to do with it.