What triggers a snapshot repository cleanup?

I have been taking a daily snapshot using Curator for the last couple of weeks, targeting the last 7 days' worth of indices. As snapshots are incremental, I wasn't expecting the size of the snapshot repo to change much over time, but this morning I'm having all sorts of issues because the shared file system is full. Even though I only have 2 snapshots stored, which between them should only go back 8 days, the indices directory in the snapshot repository contains indices that are several weeks old.
I did have another cluster pointing to this repo, and it's possible that cluster had a 'test' snapshot targeting data in these older indices, but I have since deleted that snapshot.
What I need to know now is how I can recover space on the repo's shared file system so that I can take a new snapshot this morning. Is it safe to just delete the directories relating to the old indices in the repository, or is there some way I can trigger Elasticsearch to go and remove the data it no longer needs?

If old snapshots are deleted through the snapshot API, Elasticsearch handles the cleanup itself, so don't delete files from the repository manually.

From https://www.elastic.co/guide/en/elasticsearch/reference/5.2/modules-snapshots.html#_snapshot

When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted snapshot and not used by any other snapshots.
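For reference, a minimal sketch of deleting a snapshot through the API rather than from the file system. The repository name `my_backup`, snapshot name `test_snapshot_1`, and host `localhost:9200` are placeholders for your own setup; the script only prints the request it would make, with the real `curl` call left commented out:

```shell
ES_HOST="${ES_HOST:-localhost:9200}"
REPO="my_backup"            # placeholder repository name
SNAPSHOT="test_snapshot_1"  # placeholder name of the stale snapshot

# Deleting via this endpoint (never via rm on the shared file system)
# is what triggers Elasticsearch to remove any files in the repo that
# are no longer referenced by the remaining snapshots.
URL="http://${ES_HOST}/_snapshot/${REPO}/${SNAPSHOT}"
echo "DELETE ${URL}"
# Uncomment to actually send the request:
# curl -XDELETE "${URL}"
```

Listing what is still registered first (`GET /_snapshot/my_backup/_all`) is worth doing, since a leftover snapshot from the other cluster would explain why the old indices directories are still being retained.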


The problem was that the snapshots had consumed all the available space on the destination machine (25TB) and were causing us issues. I tried deleting snapshots, but it wasn't freeing up space. I was left with 2 snapshots which should only have gone back 8 days, but there were indices in the snapshot folder going back over a month. In the end I had to delete them manually and hope for the best.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.