I'm testing the automatic heap size feature in 7.16.2. In a 12 GB VM I had been specifying a 4 GB heap; I removed that setting, restarted Elasticsearch, and it used 1 GB for the heap. That seems small to me.
Does it grow the heap if it needs to or is this set at startup?
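For reference, a standard JVM cannot grow its heap past `-Xmx` after startup, and Elasticsearch sets `-Xms` and `-Xmx` to the same value, so whatever size is chosen at startup is fixed for the life of the process. If the automatic size looks wrong, it can still be pinned manually with an override file; a sketch (the file name `heap.options` is arbitrary, and the path assumes a package install):

```
# /etc/elasticsearch/jvm.options.d/heap.options
# Pin min and max heap to the same value, as Elasticsearch recommends
-Xms6g
-Xmx6g
```

Files in `jvm.options.d` take precedence over the automatic calculation, so this is also a quick way to rule out a misread of the VM's memory.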
I'm not an Elastic Stack professional, but from my experience 1 GB seems a bit small to me too.
I always set the maximum heap possible (50% of my RAM, without exceeding 31 GB), and personally I haven't used the automatic / default configuration. However, it depends on your node's needs; I think the node's role matters a lot when deciding how to size the heap.
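The rule of thumb above (half of RAM, never more than 31 GB, to stay under the compressed-pointers threshold) can be sketched as a tiny helper; the function name is just for illustration:

```python
def recommended_heap_gb(ram_gb: float) -> float:
    """Rule of thumb from this thread: half of RAM, capped at 31 GB
    so the JVM can keep using compressed object pointers."""
    return min(ram_gb / 2.0, 31.0)

print(recommended_heap_gb(12))   # 6.0 -- the 12 GB VM from the question
print(recommended_heap_gb(128))  # 31.0 -- capped even with plenty of RAM
```

By this rule, a 12 GB VM would get roughly a 6 GB heap, which is why 1 GB looks surprising here.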
Are you having any issues with 1 GB of heap? What is the role of your node?
I'll leave you with the following link, which may help.
Does this mean anything?
/usr/share/Elasticsearch/jdk/bin/java --version
openjdk 17.0.1 2021-10-19
OpenJDK Runtime Environment Temurin-17.0.1+12 (build 17.0.1+12)
OpenJDK 64-Bit Server VM Temurin-17.0.1+12 (build 17.0.1+12, mixed mode, sharing)
Sorry Len, I don't have any other ideas apart from the question about the JVM version. It could be a bug, and IMO it's also a bug that there seems to be no way to trace how the heap is being calculated. I suggest opening a GitHub issue on the subject.
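In the meantime, you can at least confirm what heap the node is actually running with via the nodes info API; a sketch (assumes a node listening on `localhost:9200` with no security):

```shell
# Report the max heap the running node was started with
curl -s 'http://localhost:9200/_nodes/_local/jvm?filter_path=nodes.*.jvm.mem.heap_max'
```

The startup log also prints a `heap size [...]` line, which would be useful evidence to attach to a GitHub issue.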