I am looking for verification of the optimal JVM heap size configuration for our production cluster.
We are running on physical nodes with 500 GB of RAM each.
I have read two conflicting pieces of advice regarding heap sizing:
1. The 50% Rule: set the heap to 50% of the node's available RAM.
2. The 31GB Limit: never set the heap above ~31GB, so the JVM can keep using compressed ordinary object pointers (Compressed OOPs).
My Query:
Is strictly capping the heap at 31GB the correct approach for a 500GB node, or is there any scenario where a larger heap (e.g., 64GB+) is recommended?
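For context, here is a minimal sketch of what I am currently planning, assuming a recent Elasticsearch version that picks up overrides from `config/jvm.options.d/` (the file name `heap.options` and the 30g value are just my own choices, not a recommendation from the docs):

```
# config/jvm.options.d/heap.options  (file name is arbitrary)
# Pin min and max heap to the same value, kept a little below the ~32GB
# threshold where the JVM falls back to uncompressed 64-bit object pointers.
-Xms30g
-Xmx30g
```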
For reference, the relevant passage from the Elasticsearch heap sizing documentation:

> Set Xms and Xmx to no more than 50% of the total memory available to each Elasticsearch node. […] The 50% guideline is intended as a safe upper bound on the heap size. You may find that heap sizes smaller than this maximum offer better performance, for instance by allowing your operating system to use a larger filesystem cache.
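In case it helps anyone answering: this is how I have been sanity-checking whether a given heap size still gets Compressed OOPs, using the standard JVM `-XX:+PrintFlagsFinal` diagnostic (again, 30g is just my candidate value):

```
# Ask the JVM directly whether compressed oops would be enabled at this heap size
java -Xmx30g -XX:+PrintFlagsFinal -version | grep -i usecompressedoops
```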