Is there any change in the recommended setting for maximum heap size (<32GB) now that ES supports Java 11 and G1GC? I'd be really happy to try allocating more RAM to the heap. I also have a question: can anybody point to any performance testing of setups with more than 32GB heap sizes, or share personal experience with such setups?
The docs on setting the heap size are up to date and reflect the current recommendations; in particular they continue to prescribe setting your heap small enough that it supports compressed oops, and ideally zero-based compressed oops.
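For context on where the ~32GB ceiling comes from: compressed oops encode object references as 32-bit offsets scaled by the JVM's 8-byte object alignment, so they can only address 2^32 × 8 bytes of heap. A quick sketch of that arithmetic (the numbers here assume the default 8-byte alignment; the exact usable threshold in practice is slightly below this, which is why the docs suggest staying under ~31GB):

```python
# Compressed oops: a 32-bit reference, shifted by the object-alignment
# factor (log2(8) = 3 bits), can address at most this many bytes of heap.
REFERENCE_BITS = 32
OBJECT_ALIGNMENT_BYTES = 8  # HotSpot default (-XX:ObjectAlignmentInBytes=8)

max_compressed_oops_heap = (2 ** REFERENCE_BITS) * OBJECT_ALIGNMENT_BYTES

print(max_compressed_oops_heap)                      # bytes
print(max_compressed_oops_heap / (2 ** 30), "GiB")   # 32.0 GiB
```

Above that size every reference becomes a full 64-bit pointer, so a heap slightly over the threshold can actually hold fewer objects than one slightly under it. You can check whether your JVM is using compressed oops with `java -Xmx31g -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode -version`.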
Thanks, David, for the reply. I've read the docs, of course. I was just wondering whether there's any difference between Java 8 and Java 11 here. I'm not very familiar with Java, especially its memory management. Anyway, I'm leaving this question open in case somebody can share their experience with performance after setting up a heap larger than 32GB.
For our clusters, we use large bare-metal servers with 512GB RAM, running a single ES instance with a 250G heap (we've always been using G1GC - even back when it was not recommended to do so - with no issues). We experimented for a long time with multiple 30G heap instances but ultimately found that running a single instance with a massive heap yielded far better performance than multiple instances. The behaviour of the cluster no doubt depends on your use cases so I would recommend that you run your own tests and make comparisons.
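If you want to run that kind of comparison yourself, one option is Elastic's own benchmarking tool, Rally, pointed at each candidate configuration in turn. A sketch (the host address and track choice are placeholders; pick a track that resembles your own workload):

```shell
# Install Rally (requires Python 3).
pip install esrally

# Benchmark an existing cluster without letting Rally provision it:
# run the same track against each heap configuration and compare the reports.
esrally race --track=geonames \
  --target-hosts=127.0.0.1:9200 \
  --pipeline=benchmark-only \
  --report-file=single-250g-heap.md
```

Running an identical track against the single-large-heap and multiple-30G-heap layouts gives you throughput and latency numbers you can compare directly, rather than relying on anecdotes.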
Our various current deployments use either ES5 or ES6 (with Oracle Java 8). We're currently in the process of migrating our products to ES7/Java11, and so far the performance seems to be on par with earlier versions.