I use the snapshot & restore module of Elasticsearch 2.3 to back up our server every day.
We rotate the snapshots, keeping only the last 7.
The rotation works; I can verify it with this command:
curl -XGET 'localhost:9200/_snapshot/my_backup/_all?pretty'
But old indices are not deleted from the repository, so my backup takes up a lot of space.
What can I do? Should I use rm -r on very old indices?
Thanks for the help.
Try using Elasticsearch Curator. Version 4.3.1 will work with Elasticsearch v2.
Hi, thank you for your response!
So, if I understand correctly, I keep using the snapshot & restore module to back up my server and XDELETE to rotate the snapshots, but I have to add Curator to shrink the backups in my ES repository?
You can use Curator to do all of it. Create a Curator action file that
- performs your snapshot
- deletes indices older than x days
- deletes snapshots older than x days
each in sequence, all in the same action file.
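As a rough sketch, a Curator 4.x action file for those three steps could look like the following. The repository name `my_backup` comes from this thread; the snapshot name pattern and the day counts (50 days for indices to match the Graylog retention, 7 days for snapshots) are placeholders to adapt:

```yaml
actions:
  1:
    action: snapshot
    description: "Snapshot all indices into the my_backup repository"
    options:
      repository: my_backup
      name: snapshot-%Y.%m.%d
      wait_for_completion: True
    filters:
    - filtertype: none
  2:
    action: delete_indices
    description: "Delete indices older than 50 days"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 50
  3:
    action: delete_snapshots
    description: "Delete snapshots older than 7 days"
    options:
      repository: my_backup
      ignore_empty_list: True
    filters:
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 7
```

You would run it with `curator --config curator.yml action_file.yml`, where `curator.yml` is the client configuration pointing at your cluster. If Graylog is already deleting old indices for you, action 2 can be dropped.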
Ok, thank you; if I have no choice, I will use Curator.
For now we use Elasticsearch with Graylog, and Graylog deletes indices older than 50 days.
So I back up only the last 50 days of indices.
To be sure:
Is it normal that my old indices (in my backup) are not deleted, even though I use curl -s -XDELETE?
And if I remove old indices (in my backup) with rm -r, can I corrupt my backup?
No, this is not normal. If indices persist after a delete, it could be a sign of an overloaded cluster.
Absolutely. You should never interact with the files in your data directory or your snapshot repository in any other way than via the designated API calls.
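For reference, deleting a snapshot should go through the snapshot API, along these lines (the snapshot name `snapshot-2016.01.01` is a placeholder):

```shell
# Delete one snapshot from the my_backup repository via the API.
# Elasticsearch then cleans up any repository files that are no
# longer referenced by the remaining snapshots.
curl -XDELETE 'localhost:9200/_snapshot/my_backup/snapshot-2016.01.01'
```

Because snapshots are incremental and share segment files, only the API knows which files in the repository are still referenced; that is why rm -r on repository folders can corrupt the remaining snapshots.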
While I appreciate that you're using the API calls to handle what you're doing, Curator was expressly designed to be an index and snapshot selecting wrapper for those API calls, to enable you to do everything much more easily. I think if you give it a try you'll find it to your liking.
In the end this is OK.
The old index folders in the repository are just empty.