I have been creating hourly snapshots of our production Elasticsearch cluster to AWS S3, and have been using Curator to run this task. Now I am setting up a Curator action to delete snapshots older than n days, and it is extremely slow. The repository does have a large number of snapshots: it is deleting about 2 snapshots every 30+ minutes, and I have thousands to clean up. Is it possible to speed this up? Once this cleanup is done, I plan to run the cleanup alongside snapshot creation so a large backlog of snapshots won't build up again.
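For the ongoing cleanup, a Curator `delete_snapshots` action file along these lines should work. This is a sketch: the repository name `my_s3_repo` and the 30-day retention are placeholders to adjust for your setup.

```yaml
actions:
  1:
    action: delete_snapshots
    description: >-
      Delete snapshots older than 30 days from the S3 repository.
    options:
      repository: my_s3_repo   # placeholder; use your repository name
      retry_interval: 120
      retry_count: 3
    filters:
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 30
```

Running this on the same schedule as the snapshot-creation action keeps the number of snapshots to delete per run small.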
Curator does not use any method other than the regular Elasticsearch APIs. It is really just an index- and snapshot-selecting wrapper around them. With that understanding, know that it is Elasticsearch that is taking so long to delete the snapshots, not Curator.
You described a system with a high number of snapshots. The snapshot delete process can become quite lengthy when there are a large number of indices and segments in the snapshot repository, because each segment selected for deletion must be compared against every other snapshot in the repository to see whether any other snapshot still requires that same segment. As the number of segments decreases, the snapshot deletion speed will gradually increase, since the number of comparisons will correspondingly decrease. There is unfortunately no silver bullet for this. You will simply have to wait it out, and clean it up as quickly as Elasticsearch will permit.
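To see why this scales so badly, here is a toy model (not Elasticsearch code) of the comparison count: every segment in the snapshot being deleted is checked against every remaining snapshot, so cost grows with both the segment count and the snapshot count.

```python
# Toy model of snapshot-deletion cost: each segment in the snapshot
# being deleted must be checked against every other snapshot in the
# repository before it can safely be removed.

def deletion_checks(snapshots: dict[str, set[str]], name: str) -> int:
    """Count segment-vs-snapshot comparisons needed to delete `name`."""
    target_segments = snapshots[name]
    other_snapshots = [s for n, s in snapshots.items() if n != name]
    # One comparison per (segment, other snapshot) pair.
    return len(target_segments) * len(other_snapshots)

# 1000 snapshots of 100 segments each (roughly what hourly snapshots
# accumulate over ~6 weeks).
repo = {f"snap-{i}": {f"seg-{j}" for j in range(100)} for i in range(1000)}
print(deletion_checks(repo, "snap-0"))  # 100 segments * 999 other snapshots
```

With hourly snapshots accumulating for months, that comparison count per deleted snapshot is why each delete takes so long, and why it speeds up as the backlog shrinks.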
You should also know that remote, network-based repository types (Azure, S3, and the like) can be slower than a local NFS repository because of the network round-trips. Since you're using S3, deletes will also simply take longer.
Thanks for clarifying. I guess I will have to bite the bullet until this first round of cleanup is done.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.