I have a 16C/64G machine running CentOS 7.6, and free -g reports 62GB of total memory:
              total  used  free  shared  buff/cache  available
Mem:             62    34     0       0          27         27
Swap:             0     0     0
According to the reference, I have three choices for setting the JVM heap size:
31.5gb: the largest size I can set if I want to use compressed object pointers (compressed oops)
31gb: half of the system memory, still using compressed oops
30gb: the largest size I can set if I want to use zero-based compressed oops
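As a side note, you don't have to guess which mode a given -Xmx lands in: HotSpot has a diagnostic flag that prints the compressed-oops mode at startup. A minimal sketch, assuming a HotSpot JVM is on the PATH and the machine has enough memory to reserve the heap:

```shell
# Print the compressed-oops mode HotSpot picks for a 30GB heap.
# -version makes the JVM exit immediately after printing.
java -Xmx30g \
     -XX:+UnlockDiagnosticVMOptions \
     -XX:+PrintCompressedOopsMode \
     -version
```

On a typical HotSpot build this prints a line describing the heap address and the oops mode (for example "Zero based" versus a non-zero base); the exact wording varies by JVM version.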
Half of the system memory, up to just under 32GB. If you get near 31-32GB, check the Elasticsearch logs at startup to confirm it is actually using compressed pointers.
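For example, Elasticsearch logs a line about compressed ordinary object pointers when it boots. A quick way to find it, assuming the default package log location (adjust the path for your install):

```shell
# Look for the compressed-oops line in the Elasticsearch startup log.
# /var/log/elasticsearch/ is the default for RPM/DEB installs; your
# path may differ.
grep -i "compressed ordinary object pointers" /var/log/elasticsearch/*.log
```

If the line reports [true], the heap is small enough for compressed oops; [false] means you've crossed the threshold and should lower -Xmx.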
Sorry, I didn't express it clearly.
I want to compare zero-based compressed oops with ordinary compressed oops; both options (30GB and 31GB) use compressed object pointers.
I know the threshold values for compressed oops and for zero-based compressed oops, but I don't know which one performs better.
Theoretically, zero-based compressed oops outperform ordinary compressed oops, but to use them the heap must be no more than 30GB, which is 1GB less than the 31GB I could set with ordinary compressed oops.
The exact threshold varies from JVM to JVM; I don't think 30GB is always low enough, but it often is. As long as your JVM reports that it's using zero-based compressed oops, you're fine.
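Since the threshold is JVM-specific, the most reliable approach is to probe your own JVM at a few candidate heap sizes and read the mode it reports. A minimal sketch, again assuming a HotSpot JVM and a machine that can reserve these heaps:

```shell
# Probe several -Xmx values and show which compressed-oops mode
# each one gets on this particular JVM.
for heap in 30g 31g 32g; do
  echo "== -Xmx${heap} =="
  java -Xmx"${heap}" \
       -XX:+UnlockDiagnosticVMOptions \
       -XX:+PrintCompressedOopsMode \
       -version 2>&1 | grep -i "oop"
done
```

Pick the largest value that still reports zero-based mode; on many builds that is a little above 30GB, but only your own JVM's output is authoritative.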