Incremental backup


#1

Our cluster has not been backed up since the beginning of the year. I ran the snapshot API to back up all the indices as shown below; it took 19 hours, since there is a lot of data. I am planning to run the backup daily going forward. How would I make sure it does not back up the same indices over and over, taking up lots of disk space? Second question: do the snapshot names need to be unique when I run the backup daily, or can I reuse the same name, e.g. snapshot_abc_2015?

curl -XPUT "localhost:9200/_snapshot/s3_repository/snapshot_abc_2015?wait_for_completion=true" -d '{
  "indices": "abc-2015.*",
  "include_global_state": false
}'


(Nik Everett) #2

The backups automatically share files where possible, and since an index is made up of read-only segment files for the most part, backing up the same index twice is fine. If you modify the index, the changed files will be different, but you'll be able to restore to either snapshot. When you remove an old snapshot, any files that are no longer referenced are removed as well.
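To illustrate the cleanup side of that: deleting a snapshot through the API only removes files from the repository that no remaining snapshot references. A sketch (the repository name is from the question; the dated snapshot name is hypothetical, and the command is shown with `echo` so you can inspect it before actually sending it):

```shell
# Remove an old snapshot; shared segment files still referenced by
# newer snapshots stay in the repository, only unreferenced files go.
RUN=echo   # set RUN="" to actually send the request

$RUN curl -XDELETE "localhost:9200/_snapshot/s3_repository/snapshot_abc_2015.01.01"
```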

Unique, I believe. You'll want to be able to keep the last N snapshots or something similar.
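One common way to do both at once (a sketch; the repository name and index pattern come from the question, while the retention period, host, and date format are example values — and `date -d` is GNU syntax): embed the run date in the snapshot name so each daily snapshot is unique, then expire snapshots older than N days so unreferenced files get reclaimed.

```shell
#!/usr/bin/env bash
# Daily snapshot job sketch: unique date-stamped names plus retention.
set -eu

ES="localhost:9200"
REPO="s3_repository"
RETENTION_DAYS=30   # arbitrary example value
RUN=echo            # set RUN="" to actually issue the requests

# Unique per day, e.g. snapshot_abc_2016.03.14
SNAPSHOT="snapshot_abc_$(date +%Y.%m.%d)"

# Take today's snapshot; segment files unchanged since the previous
# snapshot are shared inside the repository, not copied again.
$RUN curl -XPUT "${ES}/_snapshot/${REPO}/${SNAPSHOT}?wait_for_completion=true" \
  -d '{"indices": "abc-2015.*", "include_global_state": false}'

# Expire the snapshot taken RETENTION_DAYS ago (GNU date syntax).
# Deleting it removes only files no other snapshot still references.
OLD="snapshot_abc_$(date -d "${RETENTION_DAYS} days ago" +%Y.%m.%d)"
$RUN curl -XDELETE "${ES}/_snapshot/${REPO}/${OLD}"
```

Run from cron once a day; with `RUN=echo` it just prints the two curl commands, which is handy for checking the generated names first.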


#3

Thanks a lot for the reply


(system) #4