ES 6.8.3
I understand from reading around the forum that ES will use more memory than the configured heap max. On our systems we see that if it is given 32GiB of max heap, its memory usage slowly keeps increasing and consumes a lot more than 32GiB.
So is there a way to limit the maximum RES memory ES will use? Are there any breakers to configure this behaviour? If we limit it via the cgroup mechanism, it will cause OOM-killing of the process.
I believe the circuit breakers only manage in-heap allocations. I'm trying to see how system memory usage beyond the heap can be limited.
This isn't quite the full story because Elasticsearch also uses a reasonable amount of direct memory too (plus other minor overheads). Direct memory is also bounded as a function of the configured maximum heap size. In general you should expect the Elasticsearch process to hold at most twice its configured maximum heap size.
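The arithmetic behind that rule of thumb can be sketched as follows. This is only an illustration: by default Elasticsearch sets `-XX:MaxDirectMemorySize` to half the heap, and the extra overhead figure here is an assumption, not a documented constant.

```python
# Rough sketch of the expected JVM footprint under Elasticsearch defaults.
# Direct memory defaults to half the heap (-XX:MaxDirectMemorySize = heap/2);
# overhead_gb (thread stacks, JVM metadata, etc.) is an assumed ballpark figure.
def expected_footprint_gb(heap_gb, overhead_gb=2):
    direct_gb = heap_gb / 2
    return heap_gb + direct_gb + overhead_gb

print(expected_footprint_gb(32))  # 50.0
```

So a 32GiB heap plausibly accounts for ~50GiB of genuinely-used memory, which stays within the "at most twice the heap" expectation; anything well beyond that is usually disk-backed page cache rather than allocated memory.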
FYI, there's the -XX:MaxRAM option. I've found it vital for running any Java process under Docker, and I wonder why it isn't better known in the community. Some explanations can be found at https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/. Note, I haven't ever checked ES's startup scripts and don't know whether they set -XX:MaxRAM at all.
@DavidTurner Thank you. Are there any graceful knobs to limit this? On our setups I cannot say definitively, but before everything broke, we noticed ES was using 75GiB of RES with the heap configured at 32GiB.
Even if there is no hard ceiling, is there a document that describes best practices for limiting its growth?
@Mikhail_Khludnev would using that cause ES to go out of memory and crash? If it gets killed due to out-of-memory, there will be other issues to deal with.
Not quite, because AIUI the RES figure also includes disk-backed pages from memory-mapped files. There's no need to limit the amount of disk-backed pages since they are a cache and are dropped on demand.
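One way to check this yourself on Linux is to look at `/proc/<pid>/smaps_rollup`, which splits resident memory into anonymous pages (heap, direct buffers) and file-backed pages (memory-mapped Lucene files). A minimal sketch, assuming a Linux host and a readable `/proc`:

```python
# Sketch: break down a process's resident memory by backing type on Linux.
# File-backed RES (mmapped index files) is reclaimable page cache; anonymous
# RES is the memory the JVM actually allocated.
def rss_breakdown(pid):
    totals = {"Rss": 0, "Anonymous": 0}
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in totals:
                totals[key] = int(rest.split()[0])  # values are in kB
    totals["FileBacked"] = totals["Rss"] - totals["Anonymous"]
    return totals
```

Running this against the Elasticsearch PID should show that most of a large RES figure is file-backed, i.e. cache that the kernel can drop on demand.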