Sizing question

Elasticsearch's recommendations are as follows:
Maximum of 32 GB heap
20 shards per 1 GB of heap
40 to 60 GB of data per shard

This gives approximately 25 TB to 39 TB of data per node. Is this only theoretical, or would it really work in practice?
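
For reference, here is the back-of-the-envelope arithmetic behind that range, written as a small Python sketch. It simply multiplies the three figures quoted above; the variable names are mine and the calculation assumes all three limits are hit at once, which real clusters rarely do.

```python
# Rough per-node ceiling implied by the quoted guidance.
heap_gb = 32                 # maximum recommended heap per node
shards_per_gb_heap = 20      # maximum shards per 1 GB of heap
shard_size_gb = (40, 60)     # recommended data per shard

max_shards = heap_gb * shards_per_gb_heap            # 640 shards
data_tb = [max_shards * size / 1000 for size in shard_size_gb]

print(max_shards)            # 640
print(data_tb)               # [25.6, 38.4] -> roughly 25 to 39 TB per node
```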

This is a maximum, not a recommended target. The number will depend on shard size, mappings, index settings and node load.

Note that these recommendations were written for an older version of Elasticsearch and may have changed since.

It depends on what your requirements are. Improvements in recent versions have reduced the amount of memory required per shard, but holding lots of data on a node naturally affects query performance.
