The amount of heap a node needs depends on a lot of factors beyond the shard count, e.g. the amount of data held, the type of data and mappings, and the load and queries the cluster is subject to. I would recommend installing monitoring and looking at this to see whether you have enough heap.
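As a quick check without full monitoring, something like the sketch below can pull per-node heap usage and shard allocation from the _cat APIs. The cluster URL and the absence of authentication are assumptions; adjust them for your setup.

```python
# Minimal sketch: query per-node heap usage and shard allocation via the _cat APIs.
# The cluster URL (and lack of authentication) is an assumption; adapt as needed.
import urllib.request

ES_URL = "http://localhost:9200"  # assumed local, unsecured cluster

def cat(endpoint):
    with urllib.request.urlopen(f"{ES_URL}{endpoint}") as resp:
        return resp.read().decode()

# Heap currently used, heap used as a percentage, and maximum heap per node.
print(cat("/_cat/nodes?v&h=name,heap.current,heap.percent,heap.max"))

# Shard count and disk usage per data node.
print(cat("/_cat/allocation?v"))
```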
We have 2142 shards in total across 3 data nodes, so each node holds 714 shards. The heap size on each node is 10GB, and each node is consuming about 6.5GB of that 10GB heap.
The total indices size is 638GB out of 1.2TB of disk. We know that a node with a 30GB heap can handle 600 shards, but we have a 10GB heap, so how many shards can it handle?
If your total data size is 638GB, you have an average shard size of less than 300MB, which is very small and inefficient. The guideline on shard count relative to heap size is a maximum aimed at reducing the risk of oversharding, which is one of the most common problems when dealing with time series data. The general rule of thumb is that you should aim for a maximum of 20 shards per GB of heap, which in your case would mean a maximum of 600 shards across the cluster. Note that this limit is not a recommended number of shards, nor a level any cluster is guaranteed to be able to handle. In a properly configured cluster I would expect the shard count to be considerably smaller than the maximum, as the recommended shard size is often between 10GB and 50GB when dealing with time series data.
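For reference, here is the arithmetic behind those figures as a small sketch, using the numbers reported in this thread and the 20-shards-per-GB-of-heap rule of thumb:

```python
# Worked numbers from this thread, applying the 20-shards-per-GB-of-heap rule of thumb.
heap_per_node_gb = 10
data_nodes = 3
total_shards = 2142
total_data_gb = 638

total_heap_gb = heap_per_node_gb * data_nodes       # 30 GB of heap across the cluster
max_shards = total_heap_gb * 20                     # 600 shards (an upper limit, not a target)
avg_shard_gb = total_data_gb / total_shards         # ~0.3 GB per shard on average

print(f"Rule-of-thumb maximum: {max_shards} shards")
print(f"Current shard count:   {total_shards}")
print(f"Average shard size:    {avg_shard_gb * 1000:.0f} MB "
      f"(recommended: 10-50 GB for time series data)")
```

The point of the calculation is that the cluster already holds more than three times the rule-of-thumb maximum, while the average shard is orders of magnitude smaller than the recommended 10-50GB range.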