We are looking for a way to move old indices to a different datacenter while still using the same cluster.
All new data will go to specific "hot" nodes (using shard allocation awareness), and each day an old index will be moved to the "cold" nodes (again, using shard allocation awareness).
No new data will be indexed on the cold nodes.
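A minimal sketch of what I have in mind, using shard allocation filtering with a custom node attribute; the attribute name `box_type` and the index names are just placeholders:

```
# elasticsearch.yml on each node: tag it as hot or cold
#   node.attr.box_type: hot     (node.box_type on older releases)

# New indices are pinned to the hot nodes at creation time
curl -XPUT 'localhost:9200/logs-2016.01.02' -H 'Content-Type: application/json' -d '{
  "settings": {
    "index.routing.allocation.require.box_type": "hot"
  }
}'

# Each day, retag the old index; its shards then relocate to the cold nodes
curl -XPUT 'localhost:9200/logs-2016.01.01/_settings' -H 'Content-Type: application/json' -d '{
  "index.routing.allocation.require.box_type": "cold"
}'
```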
I understand it's possible to have split-brain situations in this scenario, but I don't mind the cold nodes being "detached" from the cluster for some time, as long as it doesn't affect indexing/searching on the "hot" nodes.
When the connection between the two parts of the cluster comes back, they should be able to fully restore the cluster health, right?
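My assumption here (for a pre-7.x cluster using zen discovery) is that if all master-eligible nodes live in the hot datacenter and `minimum_master_nodes` is set to a majority of them, the cold side can never elect its own master during a partition, so only the cold nodes drop out and they rejoin once the link is back. Something like:

```
# Example: 3 master-eligible nodes, all in the hot datacenter.
# A majority of 3 is 2, so the cold side alone can never form its own cluster.
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}'
```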
If it's only for the duration of the migration of the data to the other datacenter, then yeah, I'm willing to restart it. (I believe it's not going to fail every few seconds...)