Quantitative Cluster Sizing

Hi, I am looking into sizing an Elasticsearch cluster and came across the session below, which is very informative.
https://www.elastic.co/webinars/elasticsearch-sizing-and-capacity-planning I have a few questions regarding the session:

  1. Cluster sizing for volume: per the session, we can have a memory:storage ratio of 1:30 for hot nodes, 1:1000 for cold nodes, etc. So if I have 8 GB of RAM on a cold node, I can store 8000 GB of data.
  2. Now consider the number of shards, which is driven by heap memory. With 8 GB of RAM we can allocate at most 4 GB to heap, so we can support at most 4 × 20 = 80 shards per node. One shard can be at most 50 GB, so the total supported data size = 80 × 50 = 4000 GB.

So the 1st point gives a different storage figure than the 2nd point. Please help me understand.
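To make the question concrete, here is the arithmetic above as a small sketch. The ratios and limits (1:1000, 20 shards per GB of heap, 50 GB per shard) are the rule-of-thumb numbers quoted from the webinar, not hard constants:

```python
# Estimate 1: memory:storage ratio (1:1000 quoted for cold nodes).
ram_gb = 8
storage_by_ratio_gb = ram_gb * 1000            # 8 * 1000 = 8000 GB

# Estimate 2: shard count limited by heap.
heap_gb = 4                                    # roughly half of RAM goes to heap
shards_per_gb_heap = 20                        # commonly cited rule of thumb
max_shard_size_gb = 50                         # commonly cited maximum shard size
max_shards = heap_gb * shards_per_gb_heap      # 4 * 20 = 80 shards
storage_by_shards_gb = max_shards * max_shard_size_gb  # 80 * 50 = 4000 GB

print(storage_by_ratio_gb, storage_by_shards_gb)  # 8000 4000
```

The two estimates disagree by a factor of two, which is exactly the discrepancy being asked about.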

The ratio mentioned for cold nodes assumes the use of frozen indices, as frozen indices use very little heap space, which allows very dense nodes.

The fact that a maximum of 20 shards per GB of heap is often recommended does not, however, mean that a node can hold that many shards irrespective of their size. If you use very large shards, you are likely to end up with fewer than 20 per GB of heap, in my experience.
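In other words, reaching 8000 GB with regular 50 GB shards would require more shards than the 20-per-GB-of-heap rule allows, which is why the 1:1000 ratio only holds when per-shard heap overhead is reduced (as with frozen indices). A quick check, using the same assumed rule-of-thumb numbers as in the question:

```python
ram_gb = 8
heap_gb = 4
target_storage_gb = ram_gb * 1000          # 1:1000 cold ratio -> 8000 GB
max_shard_size_gb = 50

shards_needed = target_storage_gb // max_shard_size_gb  # 8000 / 50 = 160 shards
heap_limited_shards = heap_gb * 20                      # 4 * 20 = 80 shards

# 160 > 80: with normal indices the heap-based shard limit is hit first,
# so the dense 1:1000 ratio is only achievable when shards consume far
# less heap than the rule of thumb assumes (the frozen-indices case).
print(shards_needed, heap_limited_shards)  # 160 80
```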



This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.