So we can copy the \elastic\data\nodes\ folder manually from one machine to another after shutting down; other sources confirm that this should work but is unsupported. But that is a one-off copy. Can we do this continuously? (Put Elasticsearch on SSD for speed, parse each index to completion, then copy that index over to HDD, because my SSD does not have enough space for everything.)
Specifically, I am worried there might be a clash in the UUID.
My apologies, I have edited that statement in the original post. But I'm sure you understand my predicament.
Would it work, in theory?
I understand there are other ways to export data, but it doesn't make sense to do the parsing on SSD and then export to HDD, since the export takes long enough that I may as well have done the parsing on HDD in the first place.
Not unless you ran separate nodes and shut them both down each time you did the copy.
But if you were going to do that, then you'd be better off running 2 nodes and then using allocation filtering to do it all online.
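For reference, the allocation-filtering approach above can be sketched roughly like this, assuming two nodes whose names (here `ssd-node` and `hdd-node`, set via `node.name` or `node.attr` in each node's config) are placeholders for your own setup:

```shell
# Run two nodes in one cluster: one on the SSD, one on the HDD.
# While parsing, pin the index's shards to the SSD node:
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.routing.allocation.require._name": "ssd-node"}'

# When parsing is done, change the filter; Elasticsearch then
# relocates the shards to the HDD node online, no shutdown needed:
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.routing.allocation.require._name": "hdd-node"}'
```

The relocation happens in the background; you can watch it with `GET _cat/recovery?v`.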
Answer: create a snapshot, as Elastic and others recommend. It's quick and doesn't take much longer than an unsupported copy-paste of the index's data folder.
Haven't tested the restore yet but no reason why it won't work.