I'm having trouble understanding the memory requirements. I hope somebody can shed some light on this.
Situation:
- 1 single machine with 20GB ram
- Base OS ubuntu 20.04 LTS
- Runs Docker containers for Elasticsearch, Logstash, Kibana, NGINX, Jupyter and Neo4j
- Only the ES and Neo4j containers have larger memory demands, and I want to give them as much as possible, since more memory improves their speed.
- I manually assign the Neo4j container 5GB of memory.
- I've read all the Elasticsearch Docker memory advice from Install Elasticsearch with Docker | Elasticsearch Guide [7.13] | Elastic
- I set `vm.max_map_count`, increased `ulimit`, and disabled swapping per container with `bootstrap.memory_lock=true` and `--ulimit memlock=-1:-1`.
- I'm aware of the guidance "Set `Xms` and `Xmx` to no more than 50% of your total memory" from here: Advanced configuration | Elasticsearch Guide [7.13] | Elastic
- I've tried letting Docker manage the JVM memory (the default recommendation from the link above).
- I'm getting heap out-of-memory errors that crash Elasticsearch. At the same time I'm seeing OOM-killer messages in `dmesg`, so it seems Elasticsearch is consuming more memory than the kernel allows it.
- I also have an installation with the same configuration where there is absolutely no issue at all. This is what surprises me the most.
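To summarize the settings above, my container setup looks roughly like this (the image tag and the exact `dmesg`/`docker stats` invocations are from my environment; treat it as a sketch, not a verified reproduction):

```shell
# Host-level kernel setting required by Elasticsearch (from the Docker install guide)
sysctl -w vm.max_map_count=262144

# Elasticsearch container with swap disabled via memory_lock and an unlimited memlock ulimit
docker run -d --name elasticsearch \
  -e "bootstrap.memory_lock=true" \
  --ulimit memlock=-1:-1 \
  docker.elastic.co/elasticsearch/elasticsearch:7.13.4

# How I observed the crashes: kernel OOM-killer messages, plus per-container memory usage
dmesg -T | grep -i "out of memory"
docker stats --no-stream
```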
Questions
I've got two major questions, although any advice on JVM settings would help.
Question 1: Is Docker's automatic JVM sizing smart enough to understand that other containers also need (significant) memory (Neo4j in my case), and will it pick the correct JVM heap size automatically?
Question 2: If I should set the JVM heap size manually, I'm a bit lost on how the 50% rule applies. Should Elasticsearch get 50% of total system memory (10GB of the 20GB), or 50% of the memory remaining after the Kibana and especially Neo4j containers take their share?
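To make the two readings of the rule concrete with my numbers (the 3GB figure for the remaining containers is my own rough guess, not a measurement):

```shell
# Reading 1: 50% of total system memory
echo "$((20 / 2)) GB"               # 10 GB heap

# Reading 2: 50% of what remains after the other containers
# (Neo4j gets 5GB; assume ~3GB combined for Kibana, Logstash, NGINX, Jupyter)
echo "$(( (20 - 5 - 3) / 2 )) GB"   # 6 GB heap
```

The gap between 10GB and 6GB is large enough on a 20GB machine that I'd like to know which reading is intended.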