Is there a way to limit RES memory used by ES?

ES 6.8.3
I understand from reading through the forum that ES will actually use more than the configured heap maximum. On our systems we see that if it is given a 32 GB max heap, its resident memory slowly keeps increasing and consumes a lot more than 32 GB.
So is there a way to limit the maximum RES memory ES will use? Are there any breakers that configure this behaviour? If we limit it via the cgroup mechanism, it will cause OOM-killing of the process.

I believe the circuit breakers control how in-heap allocations are managed, but I'm trying to see how system memory usage beyond the heap can be limited.
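For context, this is the kind of in-heap breaker I mean (a minimal sketch against a node on localhost:9200; the 70% value is only an example):

```
# cap the parent circuit breaker at 70% of heap -- this bounds tracked
# in-heap allocations, not the process's memory outside the heap
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": { "indices.breaker.total.limit": "70%" }
}'
```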

Thanks
Rajesh.

Elasticsearch will only ever use the heap you allocate.

Anything else is managed by the OS, so you will need to consult its documentation on how to manage this.

This isn't quite the full story because Elasticsearch also uses a reasonable amount of direct memory too (plus other minor overheads). Direct memory is also bounded as a function of the configured maximum heap size. In general you should expect the Elasticsearch process to hold at most twice its configured maximum heap size.
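If you want to verify both ceilings on a running node, something like this works (a sketch; `<es-pid>` is a placeholder for your Elasticsearch process ID):

```
# show the -Xms/-Xmx flags the node was started with
jps -lv | grep -i elasticsearch

# show the heap and direct-memory ceilings the JVM settled on;
# a MaxDirectMemorySize of 0 means the JVM default, which is bounded by the heap size
jcmd <es-pid> VM.flags | tr ' ' '\n' | grep -E 'MaxHeapSize|MaxDirectMemorySize'
```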


Ahh, thanks for clarifying 🙂

FYI, there's the -XX:MaxRAM option. I've found it vital for running any Java process under Docker, and I wonder why it isn't better known in the community. Some explanation can be found at https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/. Note, I haven't ever checked ES's startup .sh and don't know whether it sets -XX:MaxRAM at all.
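To see what it does, you can ask the JVM to print the ergonomics it derives from a given MaxRAM (an illustration only, not taken from ES's stock jvm.options):

```
# pretend the machine has 4 GB of RAM and print the heap ceiling
# the JVM's ergonomics choose under that limit
java -XX:MaxRAM=4g -XX:+PrintFlagsFinal -version | grep -E 'MaxRAM |MaxHeapSize'
```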

@DavidTurner Thank you. Are there any graceful knobs to limit this? On our setups I cannot say definitively, but before everything broke we noticed ES was using 75 GB of RES with the heap configured at 32 GB.

Even if there is no hard ceiling, is there a document that describes best practices for limiting its growth?

ES version : 6.8.3

@Mikhail_Khludnev would using that cause ES to go out of memory and crash? If it gets killed due to out of memory then there will be other issues to deal with.

Yes, it's documented in the reference manual here, which also suggests that 32GB of heap is normally too much to get the benefits of compressed oops.
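You can check where the compressed-oops cutoff lands on your own JVM (a quick sketch; the exact threshold varies slightly by JVM version):

```
# compressed oops stays enabled at heaps comfortably below ~32 GB...
java -Xmx28g -XX:+PrintFlagsFinal -version | grep 'UseCompressedOops '
# ...and is switched off once the heap is too large for 32-bit object offsets
java -Xmx34g -XX:+PrintFlagsFinal -version | grep 'UseCompressedOops '
```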

I suspect the 75GB RES figure you saw also includes memory-mapped files. This isn't really "used" memory in any meaningful sense since those pages are disk-backed and can be dropped on demand.
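One way to see this on Linux is to split the resident set into anonymous and file-backed pages (a sketch; `<es-pid>` is a placeholder, and the Rss* breakdown needs kernel 4.5 or later):

```
# RssAnon covers the heap and direct memory; RssFile is mostly
# the memory-mapped Lucene files that the kernel can drop on demand
grep -E '^Rss(Anon|File|Shmem)' /proc/<es-pid>/status
```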

On that page I only see heap documentation. But I understand it as follows:

  1. RES memory will be at most twice the configured max heap, i.e. -Xmx.
  2. Configure the heap to less than 32 GB (26-28 GB) so we can take advantage of compressed oops (sketch below).
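A jvm.options sketch of point 2 (the values are illustrative; -Xms and -Xmx should match):

```
# config/jvm.options -- heap kept under the compressed-oops threshold
-Xms28g
-Xmx28g
```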

Is that fair?

Thanks
Rajesh.

Not quite, because AIUI the RES figure also includes disk-backed pages from memory-mapped files. There's no need to limit the number of disk-backed pages since they are a cache and are dropped on demand.
