We are using Elasticsearch 7.2.0. We are trying to create a snapshot for one of our indices. The index size is 30 GB, and it took around 1 hour to create the snapshot. There were no previously created snapshots.
Our Elasticsearch cluster has the following configuration:
Number of Elasticsearch nodes - 3 (all of them are master as well as data nodes)
CPU - 6 cores
RAM - 23 GB
We are using Azure cloud storage.
Can someone tell us whether this is how long it usually takes for 30 GB of data? If not, are there any index/cluster settings that can be tweaked to improve this situation?
You can speed up snapshots by increasing the size of the snapshot thread pool and increasing the network rate limit on snapshots from its default (40MB/s).
The network rate limit is increased via the repository setting max_snapshot_bytes_per_sec, as documented here.
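For example, a sketch of re-registering the repository with a higher limit (the repository name `my_backup` and the `200mb` value are placeholders, and the `azure` type is assumed from the setup described above):

```
PUT _snapshot/my_backup
{
  "type": "azure",
  "settings": {
    "max_snapshot_bytes_per_sec": "200mb"
  }
}
```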
The snapshot thread pool size you increase by setting a larger value than the default ES will choose (half the number of cores, i.e. 3 in your case). You can do so by adding the following to your elasticsearch.yml as documented here:
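A sketch of that setting, assuming the standard `thread_pool.snapshot.max` key:

```
# elasticsearch.yml - raise the snapshot thread pool size
# (default is half the allocated processors, capped at 5)
thread_pool.snapshot.max: 30
```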
30 is just an example; you can experiment a little here.
I would also point out that, depending on the number of shards you have and the number of segments in those shards, upgrading to a newer version of ES might bring about huge improvements as well, since snapshotting was optimized a lot recently; 7.5+ in particular has seen non-trivial improvements in many cases.
Also note that the first snapshot is always the slowest to take. Subsequent snapshots reuse the data in the first one and are incremental on top of it. Depending on the data in your cluster and how quickly it changes, the first snapshot can often take as long as you're describing while subsequent snapshots consistently finish in under a minute.