Elasticsearch Snapshot repository size estimates

I have a size question concerning the Elasticsearch [Snapshot/Restore] API capabilities

Let’s say I am only backing up one index. An example call to the snapshot API for an index called myindex is shown below.

    response = requests.put(
        "http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true",
        json={"indices": "myindex"},
    )
Are there any known metrics for index size vs. snapshot repository size if we are only ever snapshotting one index like above? Let’s say for our discussion that the total size of all shards in the cluster is 50 GB. Does anyone have an idea of how to estimate how much storage would be needed for such a snapshot repository?

I think it is pretty straightforward: the same size as the index.

I should add that I registered the repository with the compress: true setting.
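For reference, registering a repository with compression looks something like the request below (the repository type and mount path are hypothetical; `compress` defaults to true and, per the Elasticsearch docs, applies only to metadata files such as mappings and settings, not to the segment data files themselves):

```
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_backup",
    "compress": true
  }
}
```

So `compress: true` on its own won’t meaningfully shrink the bulk of the repository, since the data files are stored uncompressed.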

But doesn't the repo still hold a history of your index over time, or at least for each snapshot? Say you are snapshotting it several times a day. Can the snapshot repo really stay about the size of the index being backed up, without ever taking up much more space than the index itself?

Let's add a time element to the question. Let's say you have an index (same description as in the original post, plus a repository registered with compression on) that grows from 50 GB to 75 GB over a period of a year, and that we are snapshotting 3 times a day. At the end of that year, would I really only be using about 75 GB to store this?
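One way to reason about this: Elasticsearch snapshots are incremental at the segment-file level, so each new snapshot only adds segment files not already in the repository, and deleting old snapshots frees files no longer referenced by any snapshot. A rough upper bound is therefore one full copy of the index plus the churn contributed by each retained snapshot. The churn figure below is a made-up assumption for illustration, not a published metric:

```python
def estimate_repo_size_gb(index_size_gb: float,
                          churn_gb_per_snapshot: float,
                          snapshots_retained: int) -> float:
    """Back-of-envelope upper bound for snapshot repository size.

    One full copy of the current index, plus the new/changed segment
    files contributed by each snapshot still retained in the repo.
    """
    return index_size_gb + churn_gb_per_snapshot * snapshots_retained


# Hypothetical numbers: 75 GB index, ~0.5 GB of new or merged segments
# per snapshot, 3 snapshots/day retained for 30 days = 90 snapshots.
print(estimate_repo_size_gb(75, 0.5, 90))  # 120.0
```

Note that segment merges rewrite data, so the per-snapshot churn can be much larger than the logical growth of the index; if snapshots are never deleted, the repository can end up considerably bigger than the live index.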

And if no metrics have been published, then I'm really just asking what people would guess based on experience. Is it really the case that, with compression and whatever Elastic does under the covers, the repository shouldn't be much bigger than the index size?
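Rather than guessing, you can measure it: the snapshot status API reports per-snapshot size statistics, and in recent versions the stats distinguish how much a snapshot actually added versus the total it references (exact field names vary by version, so check the docs for yours):

```
GET _snapshot/my_backup/snapshot_1/_status
```

Comparing the incremental figures across a few days of snapshots gives a much better basis for capacity planning than any general rule of thumb.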

PS: Thanks to elasticforme for the quick answer!

I have 15 indices and do a daily backup, and the size is pretty much the same as the index size. I do not use compression.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.