On a host with 128GB RAM, will Elasticsearch running inside a Docker container only use a total of 40GB, or will the memory-mapped files try to take up as much space as possible?
And if so, should we tell Docker to cap the container's memory at double the Xmx setting?
When setting -Xms20480m -Xmx20480m, the heap assigned will be 20GB, not 40GB. Also, you should not set the heap above ~30GB, for performance reasons (see https://www.elastic.co/blog/a-heap-of-trouble).
Lastly, keep in mind that the remaining RAM will not go unused: Elasticsearch relies on the OS-level file system cache to cache index files, which greatly improves performance.
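If you do want a hard cap on the container, a minimal sketch of what that could look like with plain docker run (image tag, names, and the 40g limit are just illustrative assumptions, not recommendations):

```
# Heap pinned at 20g via ES_JAVA_OPTS; --memory caps everything charged to
# the container's cgroup (heap, off-heap, and page cache it touches) at 40g.
docker run -d --name es-data \
  --memory=40g \
  --ulimit nofile=65536:65536 \
  -e ES_JAVA_OPTS="-Xms20g -Xmx20g" \
  docker.elastic.co/elasticsearch/elasticsearch:5.6.4
```

One trade-off to be aware of: page cache used by the container generally counts against the cgroup memory limit, so a tight cap also limits how much file system cache that node can benefit from, even though cached pages are reclaimable.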
Sorry for the slight confusion. I am aware of the Elastic-recommended memory settings. I'm running Elasticsearch with Mesos/Marathon, and the assumption was that my Marathon settings were scheduling a 40GB container, i.e. double the -Xmx20G heap. But that limit isn't actually enforced unless we use the Mesos containerizer. Anyway...
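For context, a trimmed-down sketch of the kind of Marathon app definition I mean (the id, image tag, and exact values here are placeholders; `mem` is in MB, and whether it becomes a hard limit on the container depends on the containerizer, as noted above):

```
{
  "id": "/elasticsearch/data-node",
  "cpus": 4,
  "mem": 40960,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "docker.elastic.co/elasticsearch/elasticsearch:5.6.4",
      "network": "HOST"
    }
  },
  "env": {
    "ES_JAVA_OPTS": "-Xms20480m -Xmx20480m"
  }
}
```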
I was curious how the JVM and Elasticsearch behave inside Docker with regard to memory and mmap'd files. Since the container isn't restricted memory-wise, it actually has the host's full 128GB available to it.
We also have to consider that other containers, including other Elasticsearch nodes such as my coordinating nodes, are scheduled on that host, so it is possible that the containers will contend for the available RAM.
The host has 128GB RAM and 2GB of swap, which is pointless (need to revise the cloud image, lol), but with the memory_lock setting Elasticsearch should be fine and not get swapped out.
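For the memory lock part, roughly what I have in mind (a sketch assuming the official image, which accepts settings as environment variables; names and heap sizes are again just examples):

```
# bootstrap.memory_lock=true asks the JVM to mlock its heap so it can't be
# swapped out; the memlock ulimit must be unlimited for the lock to succeed.
docker run -d --name es-data \
  --ulimit memlock=-1:-1 \
  -e "bootstrap.memory_lock=true" \
  -e ES_JAVA_OPTS="-Xms20g -Xmx20g" \
  docker.elastic.co/elasticsearch/elasticsearch:5.6.4
```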