We're running a large 0.90.10 cluster. Due to performance problems we're
seeing with has_parent queries (unrelated issue), upgrading to 1.x is not
an option for us at the current time.
We're trying to figure out how to back up 0.90. The following link gives
some ideas for doing so:
Essentially, the process is
- Stop indexes from being flushed to disk.
- Stop shard reallocation.
- Copy the data.
- Resume index flushing.
- Resume shard reallocation.
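For reference, the five steps above could be driven from a small script along these lines. This is a hedged sketch: the setting names assumed here are the 0.90-era `index.translog.disable_flush` index setting and the `cluster.routing.allocation.disable_allocation` transient cluster setting, and `localhost:9200` is an assumed address — adjust for your cluster.

```python
import json
import urllib.request

# Assumed cluster address; change for your environment.
ES = "http://localhost:9200"

# 0.90-era setting names (assumed): flushing is toggled per index via the
# index settings API, reallocation via a transient cluster setting.
DISABLE_FLUSH = {"index": {"translog.disable_flush": True}}
ENABLE_FLUSH = {"index": {"translog.disable_flush": False}}
DISABLE_ALLOC = {"transient": {"cluster.routing.allocation.disable_allocation": True}}
ENABLE_ALLOC = {"transient": {"cluster.routing.allocation.disable_allocation": False}}


def put_settings(path, body):
    """PUT a JSON settings body to the cluster and return the raw response."""
    req = urllib.request.Request(
        ES + path, data=json.dumps(body).encode("utf-8"), method="PUT")
    return urllib.request.urlopen(req).read()


# Usage (requires a running cluster):
#   put_settings("/_settings", DISABLE_FLUSH)          # step 1
#   put_settings("/_cluster/settings", DISABLE_ALLOC)  # step 2
#   ... copy the data ...                              # step 3
#   put_settings("/_settings", ENABLE_FLUSH)           # step 4
#   put_settings("/_cluster/settings", ENABLE_ALLOC)   # step 5
```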
The concern is step 3. We have a lot of data to back up, so step 3 could
take longer than we'd want to keep index flushing disabled.
However, given that segments in the data directory are immutable, I'm
wondering if we could change step 3 to first create a parallel directory
structure off to the side somewhere and then to hard link all the files in
the data directory into the equivalent directories in the parallel
structure. Running through the files and directories and creating hard
links should be sub-second.
Then, we can resume index flushing and shard reallocation, backup the
parallel directory structure (waiting however long that takes), and finally
delete the parallel directory structure.
This approach is similar to how backups work in Solr.
Will that approach work, or are there any files in the data directory that
are modified in place rather than being immutable?
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/e5503355-a5a5-40cc-adda-8f638568a76e%40googlegroups.com.