Snapshot

Hi team, are there any sizing guidelines for snapshot and restore? I want to take a snapshot of up to 10 TB of index data. Are there any CPU or memory requirements for better performance?

Do you mean better performance of the snapshot?

We are planning to leverage the NEST API to perform snapshot and restore by writing our own commands. Our application is on-prem software that runs on a Windows machine. Snapshot and restore will be done by executing the commands on the same machine where the Elasticsearch process runs for searching and indexing. We are planning to recommend a 16-core CPU and 32 GB of RAM for the application running alongside the Elasticsearch process.

Assume a few GB of data is ingested into Elasticsearch daily and snapshots are taken incrementally, along the lines of the sketch below. Would the above CPU and memory sizing fulfill the snapshot and restore needs?
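
For context, here is a minimal sketch of how we plan to drive this through NEST (assuming NEST 7.x; the repository name, snapshot naming scheme, and index pattern below are placeholders, not our actual values):

```csharp
using System;
using Nest;

class SnapshotDemo
{
    static void Main()
    {
        var client = new ElasticClient(
            new ConnectionSettings(new Uri("http://localhost:9200")));

        // Take a snapshot into an already-registered repository. Snapshots
        // are incremental: only segment files not already present in the
        // repository are copied, so daily snapshots stay cheap.
        var snapshotName = $"daily-{DateTime.UtcNow:yyyy.MM.dd}";
        var snapshot = client.Snapshot.Snapshot("my_backup", snapshotName, s => s
            .WaitForCompletion(false)); // return immediately; poll status separately

        // Restore selected indices from a snapshot. Indices being restored
        // must be closed or absent from the cluster.
        var restore = client.Snapshot.Restore("my_backup", snapshotName, r => r
            .Indices("logs-*")
            .WaitForCompletion(false));

        Console.WriteLine(snapshot.IsValid && restore.IsValid
            ? "requests accepted"
            : snapshot.DebugInformation + restore.DebugInformation);
    }
}
```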

Snapshots are mostly IO bound rather than CPU or memory bound: they copy segment files from the data path to the repository.

I would like to know the reason: why does Elasticsearch recommend taking a snapshot instead of a filesystem backup? I don't see any backup vendor in the market that supports snapshot and restore of Elasticsearch.

I think this is answered in the manual:

WARNING: The only reliable and supported way to back up a cluster is by taking a snapshot. You cannot back up an Elasticsearch cluster by making copies of the data directories of its nodes. There are no supported methods to restore any data from a filesystem-level backup. If you try to restore a cluster from such a backup, it may fail with reports of corruption or missing files or other data inconsistencies, or it may appear to have succeeded having silently lost some of your data.

There's little point in taking backups from which you cannot reliably restore.

You can, however, take a filesystem backup of your snapshot repository.
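
If you go that route, the repository has to be registered first; its location is the directory a conventional backup tool can then copy. A minimal sketch via NEST (assuming NEST 7.x; the repository name and path are hypothetical, and the location must also be whitelisted under path.repo in elasticsearch.yml on every node):

```csharp
using System;
using Nest;

class RegisterRepository
{
    static void Main()
    {
        var client = new ElasticClient(
            new ConnectionSettings(new Uri("http://localhost:9200")));

        // Register a shared filesystem snapshot repository. Everything
        // Elasticsearch writes for snapshots lands under this directory,
        // which is what a filesystem-level backup would then copy.
        var response = client.Snapshot.CreateRepository("my_backup", c => c
            .FileSystem(fs => fs
                .Settings(@"D:\elastic-backups\my_backup", s => s
                    .Compress())));

        Console.WriteLine(response.IsValid
            ? "repository registered"
            : response.DebugInformation);
    }
}
```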
