Snapshot Delete not working as expected

I have been running hourly snapshots with curator for a while without periodically deleting older ones. The snapshots are stored in Amazon S3, and the number of snapshots has grown very large. When I tried to clean up, deleting turned out to be next to impossible: it takes about 40 minutes to delete a single snapshot.
A delete snapshot also blocks any currently running create snapshot, so it unfortunately becomes a kind of deadlock. To recover, I registered a new repo, started taking snapshots under a completely different name, and am now also deleting snapshots older than 24 hours in the new repo.
The create snapshot kicks off and finishes successfully (even though it takes 40 minutes). However, the delete snapshot always fails with a read timeout error. Since I am only deleting snapshots older than 24 hours, there should only be a few snapshots to delete each run. Bumping curator's timeout_override up to 1200 seems to help.
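For reference, deleting a snapshot directly through the API with a longer client-side timeout looks roughly like this. All names here are made up; curl's `--max-time` plays the same role as curator's `timeout_override`:

```shell
# Hypothetical host, repo, and snapshot names; adjust to your setup.
ES=localhost:9200
REPO=old_repo
SNAP=hourly-snapshot-001

# --max-time raises curl's client-side timeout (in seconds) so the request
# is not abandoned while the cluster is still working on the delete,
# mirroring curator's timeout_override: 1200
curl --max-time 1200 -XDELETE "http://$ES/_snapshot/$REPO/$SNAP"
```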
My questions:

  • When I run _cat/snapshots against the new repo, it still shows the old snapshots from the old repo, so I suspect my new create and delete snapshot actions are still referring to them. Is there a way to completely disassociate these older snapshots from the new repo?
  • Is it safe to delete the old repository? Will that also delete the old snapshots or at least mark them invalid or something?
  • I need a way to restart the snapshot process cleanly. What can I do to start off with a clean slate?

Thanks for any and all help. I have been struggling with this for a while now and appreciate any pointers.

Bump — anyone have any thoughts on this one? I'm working with Pri to get to the bottom of this.

Don't re-use the same bucket and/or path. Move the existing snapshot data and metadata elsewhere.
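A sketch of what that could look like with the AWS CLI, assuming a hypothetical bucket and paths. Deregister the old repo first so nothing writes to the path while the data moves:

```shell
# Hypothetical names; adjust bucket and paths to your setup.
ES=localhost:9200
BUCKET=my-es-snapshots
OLD_PATH=snapshots/old
ARCHIVE_PATH=snapshots/archive

# 1) Deregister the old repository. This removes only the cluster's
#    reference to it; the data in S3 is untouched.
curl -XDELETE "http://$ES/_snapshot/old_repo"

# 2) Move the snapshot data and metadata out of the way so no repo
#    path overlaps with it.
aws s3 mv "s3://$BUCKET/$OLD_PATH" "s3://$BUCKET/$ARCHIVE_PATH" --recursive
```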

Thanks for the reply.
Are you referring to the path inside the S3 bucket?
And if I register a new repo in Elasticsearch that points to a brand-new S3 bucket and/or a new path, Elasticsearch should allow me to start clean. Is my assumption right?


Yes to both. I recommend a new bucket, if possible. At the very least, a completely separate filesystem path, not nested below the original repository path.
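For example, registering the replacement repo against a fresh bucket and base_path might look like this (all names hypothetical; assumes the repository-s3 plugin is installed on the cluster):

```shell
# Hypothetical host, repo, and bucket names; adjust to your setup.
ES=localhost:9200
BODY='{
  "type": "s3",
  "settings": {
    "bucket": "my-new-es-snapshots",
    "base_path": "snapshots/hourly"
  }
}'

# Register the new repository under a fresh bucket/base_path so it starts
# with empty metadata and no association with the old snapshots.
curl -XPUT "http://$ES/_snapshot/new_repo" \
  -H 'Content-Type: application/json' -d "$BODY"
```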

Thanks! Will try that.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.