We want to introduce snapshots, and restores in case of failure, to our Elasticsearch setup.
Each of our clusters holds over a few TB of primary data (×1-2 depending on the replication factor). We have been experimenting with Google Cloud Storage in S3-compatibility mode as a repository, but the restore rates are abysmally slow; it looks like we are being throttled by them.
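For context, this is roughly the kind of repository registration we mean: a minimal sketch assuming the repository-s3 plugin with its client endpoint pointed at GCS (the repository name, bucket, and rate values below are placeholders, not our real config). The per-repository throttle settings are included because they have historically defaulted to fairly low values, so they are worth ruling out before blaming the storage backend.

```
# elasticsearch.yml on each node (hypothetical client name "default"):
#   s3.client.default.endpoint: storage.googleapis.com

PUT _snapshot/gcs_s3_repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket",
    "max_snapshot_bytes_per_sec": "200mb",
    "max_restore_bytes_per_sec": "200mb"
  }
}
```

Even with those raised, the observed restore throughput stays far below what the cluster and network should sustain, which is why we suspect throttling on the GCS side.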
I wonder if any of you do snapshot and restore on large data sets. What is the size of your data? Which repository do you use? What transfer rates do you see? Are you happy with the solution? Please share your experience.