We have a cluster where heap usage sits around the 80% level on a couple of nodes. The cluster has around 10 nodes in total, but heap is high only on those couple of nodes. All nodes have 64G of total RAM with the heap set to 30G. The shard count on these nodes is between 700 and 750, which lies within the recommended 25 shards per 1GB of heap.
What does the nodes stats API give for the nodes with high heap usage?
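For reference, you can pull just the JVM memory stats like this (a minimal sketch, run from Kibana Dev Tools or via curl; the `filter_path` is only there to trim the response down to the relevant fields):

```
GET /_nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem
```

The `heap_used_percent` field in the output is the number worth comparing between the hot nodes and the quiet ones.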
Please also note that 25 shards per GB of heap is a guideline for the maximum, not necessarily a recommended, value. Depending on your mappings and shard sizes, the optimal count for your use case may well be lower.
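If you want to double-check how shards and heap line up per node, the _cat APIs give a quick side-by-side view (a minimal sketch; column names as in current Elasticsearch versions):

```
GET /_cat/allocation?v&h=node,shards
GET /_cat/nodes?v&h=name,heap.percent,heap.max
```

Comparing the per-node shard counts against `heap.percent` should show whether the two hot nodes are simply carrying more data than the rest.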