Understanding JVM memory calculation with Docker

I'm having issues understanding the memory requirements. Hope somebody can shed some light on this.


  • 1 single machine with 20GB ram
  • Base OS ubuntu 20.04 LTS
  • Runs docker containers of Elasticsearch, Logstash, Kibana, NGINX, Jupyter and Neo4j
  • Only the ES and Neo4j containers have larger memory demands, and I want to give them as much memory as possible, since that would improve their speed
  • I manually assign the Neo4j Docker container 5GB of memory.
  • I've read all the Elasticsearch Docker memory advice from Install Elasticsearch with Docker | Elasticsearch Guide [7.13] | Elastic
  • I set vm.max_map_count, increased the ulimit, and disabled swapping per container with "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
  • I'm aware of the guidance "Set Xms and Xmx to no more than 50% of your total memory" from here: Advanced configuration | Elasticsearch Guide [7.13] | Elastic
  • I've tried letting Docker manage the JVM memory (the default recommendation from the link above).
  • I'm getting heap out-of-memory errors that crash Elasticsearch. At the same time I'm seeing OOM-killer messages in dmesg, so it seems Elasticsearch is consuming more memory than the kernel allows it.
  • I also have another installation with the same configuration where this is absolutely no issue at all. This is what surprises me the most.
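For reference, the per-container settings listed above can be expressed in a Compose fragment like this (a sketch only; the image tag, the explicit 6GB heap, and the 12g container cap are illustrative assumptions, not values from my setup). Note that vm.max_map_count is a host-level sysctl (e.g. `sysctl -w vm.max_map_count=262144`), not a container setting:

```yaml
# Sketch of an Elasticsearch service with the memory settings described above.
# Image tag, heap size, and mem_limit are illustrative assumptions.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    environment:
      - bootstrap.memory_lock=true      # disable swapping inside the container
      - "ES_JAVA_OPTS=-Xms6g -Xmx6g"    # explicit heap instead of auto-sizing
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 12g                      # cap the container so other services keep headroom
```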

I've got two major questions, although any advice around JVM settings may help me.

Question 1: Is the Docker auto JVM setting smart enough to understand that there are other containers running that also require (significant) memory (Neo4j in my case), and will it allocate the correct JVM heap size automatically?

Question 2: If I should set the JVM heap size manually, I'm a bit lost on how the 50% rule applies. Should Elasticsearch get 50% of total system memory (10GB of the 20GB), or 50% of the memory remaining after the Kibana and especially Neo4j Docker containers?

Welcome to our community! :smiley:

I don't believe the automatic calculation looks at what else is running, just at the total memory the container sees. (We don't recommend running Elasticsearch alongside multiple other apps like that.)

In your case I would just set the heap manually: make it 6GB and leave the rest for your other processes and the OS.
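For what it's worth, the 6GB figure works out roughly like this on your 20GB host (a sketch; every number except Neo4j's 5GB cap is an assumption for illustration, and the 50% rule means Elasticsearch wants about as much again off-heap as it gets in heap):

```shell
# Rough memory budget behind a 6GB heap on a 20GB host.
# Only Neo4j's 5GB cap comes from the original setup; the rest is illustrative.
TOTAL_GB=20
NEO4J_GB=5       # Neo4j container, capped manually
ES_HEAP_GB=6     # Elasticsearch -Xms/-Xmx
ES_OFFHEAP_GB=6  # 50% rule: leave about as much again for ES off-heap use
REMAINING_GB=$(( TOTAL_GB - NEO4J_GB - ES_HEAP_GB - ES_OFFHEAP_GB ))
echo "Left for Kibana, Logstash, NGINX, Jupyter and the OS: ${REMAINING_GB}GB"
```

That remainder is tight, which is consistent with the OOM-killer firing when several services grow at once.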

This gives me enough direction to investigate further. Thanks!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.