How much heap memory is needed for Elasticsearch data nodes?

I am running an 8-node cluster (3 master + 3 data + 2 coordinating nodes).

  • data_node 1 contains 714 shards and has a 10 GB heap
  • data_node 2 contains 715 shards and has a 10 GB heap
  • data_node 3 contains 715 shards and has a 10 GB heap

Is a 10 GB heap sufficient?

The amount of heap a node needs depends on many factors beyond the shard count, e.g. the amount of data held, the type of data and mappings, and the load and queries the cluster is subject to. I would recommend installing monitoring and looking at it to see whether you have enough heap.
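If you want a quick check before setting up full monitoring, the cat nodes API exposes current heap usage per node. A minimal Python sketch using the requests library, assuming a cluster reachable at http://localhost:9200 with security disabled (adjust the URL and add authentication for your own setup):

```python
# Quick check of heap usage per node via the _cat/nodes API.
# Assumes http://localhost:9200 and no authentication; adjust as needed.
import requests

resp = requests.get(
    "http://localhost:9200/_cat/nodes",
    params={"h": "name,node.role,heap.current,heap.percent,heap.max", "format": "json"},
)
resp.raise_for_status()

for node in resp.json():
    print(
        f"{node['name']:<20} roles={node['node.role']:<10} "
        f"heap={node['heap.current']}/{node['heap.max']} ({node['heap.percent']}%)"
    )
```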

We have 2,142 shards in total across 3 data nodes, so each node holds about 714 shards with a 10 GB heap, of which around 6.5 GB is currently being consumed.
The total index size is 638 GB out of 1.2 TB of disk. We know that a node with a 30 GB heap can handle 600 shards, but with a 10 GB heap, how many shards can it handle?
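For reference, a per-node view of shard count and disk usage like the figures quoted above can be pulled from the cat allocation API. A minimal sketch under the same assumptions as before (local cluster, security disabled):

```python
# Shard count and disk usage per data node via the _cat/allocation API.
# Assumes http://localhost:9200 and no authentication; adjust as needed.
import requests

resp = requests.get(
    "http://localhost:9200/_cat/allocation",
    params={"h": "node,shards,disk.indices,disk.used,disk.total", "format": "json"},
)
resp.raise_for_status()

for row in resp.json():
    print(
        f"{row['node']:<20} shards={row['shards']:>5} "
        f"indices={row['disk.indices']} used={row['disk.used']}/{row['disk.total']}"
    )
```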

If your total data size is 638 GB, you have an average shard size of less than 300 MB, which is very small and inefficient. The guideline on shard count relative to heap size is a maximum aimed at reducing the risk of oversharding, which is one of the most common problems when dealing with time series data. The general rule of thumb is that you should aim for a maximum of 20 shards per GB of heap, which in your case would mean a maximum of 600 shards across the cluster. Note that this limit is neither a recommended number of shards nor a level any cluster is guaranteed to be able to handle. In a properly configured cluster I would expect the shard count to be considerably smaller than the maximum, as the recommended shard size for time series data is often between 10 GB and 50 GB.
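As a back-of-the-envelope check of those figures (the 20-shards-per-GB-of-heap guideline and the average shard size), using the numbers from this thread:

```python
# Rule-of-thumb calculation using the numbers quoted in this thread.
heap_gb_per_node = 10
data_nodes = 3
total_shards = 2142
total_data_gb = 638

max_shards_per_node = 20 * heap_gb_per_node             # 200 shards per node
max_shards_cluster = max_shards_per_node * data_nodes   # 600 shards across the cluster
avg_shard_gb = total_data_gb / total_shards             # ~0.30 GB per shard

print(f"Guideline maximum: {max_shards_cluster} shards across the cluster")
print(f"Actual: {total_shards} shards, average size {avg_shard_gb * 1024:.0f} MB")
```

This gives a guideline maximum of 600 shards against the actual 2,142, at an average of roughly 300 MB per shard, which is why the advice above is to reduce the shard count and increase shard size rather than add heap.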


Thanks for this information
