We have several machines with 512 GB of RAM, and I wanted to know whether we can set the JVM heap size for Elasticsearch larger than 32 GB (up to 256 GB).
This page says we should keep the heap size below the threshold for compressed ordinary object pointers (oops).
However, I've found posts like this one, which suggest setting the JVM heap size beyond 64 GB.
Is that something you recommend against, or is it still acceptable? What should I be aware of if we want to use up to 256 GB of RAM?
In general, what do you recommend for large clusters that require a total heap size larger than 3 TB?
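For context, this is roughly how I've been checking whether a given heap size still gets compressed oops. It's just a minimal sketch using the standard HotSpot diagnostic MXBean; the class name `HeapCheck` is only for illustration, not anything from our actual setup.

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapCheck {
    public static void main(String[] args) {
        // Approximate max heap the JVM was started with (-Xmx), in GB.
        long maxHeapGb = Runtime.getRuntime().maxMemory() / (1024L * 1024 * 1024);

        // Ask HotSpot whether compressed ordinary object pointers are in effect.
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        String compressedOops = hotspot.getVMOption("UseCompressedOops").getValue();

        System.out.println("Max heap: ~" + maxHeapGb + " GB");
        System.out.println("UseCompressedOops: " + compressedOops);
    }
}
```

Running it with something like `java -Xmx31g HeapCheck` versus `java -Xmx40g HeapCheck` shows `UseCompressedOops` flipping from true to false once the heap crosses the compressed-oops threshold, which is what prompted the question above.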
We have a limited number of bare-metal machines with up to 512 GB of RAM, and we currently run multiple Elasticsearch data nodes on each, with 31 GB of JVM heap per node; however, I/O sharing between nodes on the same machine can sometimes be a bottleneck.
I believe the I/O sharing problem would go away if we ran a single node on each bare-metal machine instead of multiple nodes competing for I/O bandwidth. I understand the load would be consolidated onto that one node, but is there a reason not to do that?