I have a single node and multiple disks where data gets saved.
I want to replace one of the disks.
I think I can do something like this:
deploy a temporary node, replicate the data, delete the data entry for the disk I want to remove, restart, let the data replicate back, and then remove the temporary node.
Is there a correct way of doing this, like we have for excluding nodes from shard allocation? (A rough sketch of what I mean by node exclusion is below.)
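For reference, the node-level exclusion I'm thinking of looks roughly like this. This is only a minimal sketch of the cluster settings API; the node name and URL are placeholders, not my actual setup:

```python
import requests

ES = "http://localhost:9200"  # placeholder URL

# Exclude a node (here "node-1", a placeholder name) from shard allocation
# so its shards drain onto the rest of the cluster.
resp = requests.put(
    f"{ES}/_cluster/settings",
    json={
        "transient": {
            "cluster.routing.allocation.exclude._name": "node-1"
        }
    },
)
resp.raise_for_status()
print(resp.json())
```

I'm looking for something equivalent at the disk level rather than the node level.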
Couldn't you start a second node with a path.data that doesn't include that disk, use shard allocation filtering to move the data to it, shut down the original node, replace the disk, then bring the original node back up and reverse the process?
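As a rough illustration, that whole sequence can be driven with the same allocation-filter setting. This is a minimal sketch, not a tested procedure: it assumes the second node is started with a path.data list that omits the disk being replaced, and the node names and URL are placeholders.

```python
import requests

ES = "http://localhost:9200"  # placeholder URL


def exclude_node(node_name):
    """Ask the cluster to relocate all shards off the named node."""
    resp = requests.put(
        f"{ES}/_cluster/settings",
        json={"transient": {"cluster.routing.allocation.exclude._name": node_name}},
    )
    resp.raise_for_status()


def clear_exclusions():
    """Remove the exclusion once relocation has finished."""
    resp = requests.put(
        f"{ES}/_cluster/settings",
        json={"transient": {"cluster.routing.allocation.exclude._name": None}},
    )
    resp.raise_for_status()


# 1. Drain the original node onto the temporary node.
exclude_node("original-node")      # placeholder node name
# ... wait for relocation to finish (e.g. watch GET _cat/shards),
#     then shut the original node down and replace the disk ...

# 2. Bring the original node back up and reverse the process.
clear_exclusions()
exclude_node("temporary-node")     # placeholder node name
# ... wait for relocation again, then decommission the temporary node ...
```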