Controlling Elasticsearch memory inside Docker

I am running Elasticsearch 6.2.3 in a Docker container, but I do not understand how the RAM used by the process can go way beyond the limits I have set.

The container is configured with:

"Memory": 34225520640,
"CpusetMems": "",
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 34225520640,
"MemorySwappiness": null,

The Elasticsearch JVM is configured with:

      ES_JAVA_OPTS=-Xms26112m -Xmx26112m

And elasticsearch.yml is configured with:

     bootstrap.memory_lock: true
     indices.fielddata.cache.size: 50GB
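
As a side note, for bootstrap.memory_lock: true to actually take effect inside Docker the container typically also needs an unlimited memlock ulimit, and whether the lock took can be verified through the nodes API. A minimal check, assuming the node answers on localhost:9200:

    # Every node should report "mlockall": true if the heap is really locked in RAM
    curl -s 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'

    # If it reports false, the container is usually missing something like:
    #   docker run ... --ulimit memlock=-1:-1 ...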

When I run 'top' on the (Linux) host, I have seen virtual memory all the way up to 75 GB:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
128908 elastic+ 20 0 68.1g 27.2g 651776 S 151.8 27.6 606:17.50 java

How can I

  1. Calculate the maximum memory Elasticsearch is actually using (the closest I have come so far is sketched below)
  2. Control / set a maximum limit to ensure it does not steal memory from other processes I want to run on the same machine
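
To be concrete about point 1, this is what I have been looking at so far (rough sketch; the container name and the node address are assumptions for my setup):

    # Container-wide usage as the cgroup (and hence the oom_killer) sees it
    docker stats --no-stream elasticsearch

    # What the JVM itself reports: heap plus committed non-heap, assuming the node answers on localhost:9200
    curl -s 'localhost:9200/_nodes/stats/jvm?pretty' | grep -E 'heap_used_in_bytes|heap_max_in_bytes|non_heap_used_in_bytes'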

Such a high VIRT usage doesn't mean that Elasticsearch is consuming a lot of memory, just that it is consuming address space, which is a side-effect of the fact that we open some files via mmap. The actual memory usage is RES, which is in line with the JVM options that you configured. See http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html if you want more information about this.
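
If you want to convince yourself of that, a per-process breakdown shows it; for example (the pid is taken from your top output above, so adjust it for your host):

    # In the total line, the Kbytes column is address space (VIRT) while RSS is what actually sits in RAM;
    # the largest individual mappings are typically read-only maps of Lucene index files.
    pmap -x 128908 | tail -n 1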

Thanks for answering.
The oom_killer is shooting down the Elasticsearch instance running inside the Docker container, and looking at dmesg (see below) I draw the conclusion that it kills it NOT because of the ~28 GB the JVM is using, but because of the virtual address space, or at least that is my current thinking. What is your take on this?
I also wonder whether going for NIO instead of mmap would make life better; a sketch of what I mean follows the dmesg output.
-Tobias

[Feb22 05:32] java invoked oom-killer: gfp_mask=0x50, order=0, oom_score_adj=0
....
[ +0.000003] Task in /docker/120ed3942b7ea7aa984fbaeb0033573cc49f62ad89c539ef4eae7106992bfe6c killed as a result of limit of /docker/120ed3942b7ea7aa984fbaeb0033573cc49f62ad89c539ef4eae7106992bfe6c
[ +0.000002] memory: usage 33423356kB, limit 33423360kB, failcnt 6794709
[ +0.000001] memory+swap: usage 33423360kB, limit 33423360kB, failcnt 424118
[ +0.000002] kmem: usage 5530848kB, limit 9007199254740988kB, failcnt 0
[ +0.000001] Memory cgroup stats for /docker/120ed3942b7ea7aa984fbaeb0033573cc49f62ad89c539ef4eae7106992bfe6c: cache:2356KB rss:27890152KB rss_huge:27711488KB mapped_file:144KB swap:4KB inactive_anon:279480KB active_anon:240780KB inactive_file:1140KB active_file:1072KB unevictable:27370032KB
[ +0.000009] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[ +0.000140] [72588] 64981 72588 9325318 6980834 16006 1 0 java
[ +0.000007] Memory cgroup out of memory: Kill process 49164 (java) score 837 or sacrifice child
[ +0.001314] Killed process 72588 (java) total-vm:37301272kB, anon-rss:27886596kB, file-rss:36740kB, shmem-rss:0kB
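
For completeness, this is the kind of change I have in mind for trying NIO instead of mmap (untested sketch; as far as I understand, index.store.type can be set as a node-wide default in elasticsearch.yml on 6.x):

    # elasticsearch.yml -- use plain NIO file access instead of the default mmap-based store
    index.store.type: niofs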

Sorry, I'm not very familiar with Docker and the OOM killer. Elasticsearch allocates some additional memory directly from the operating system, in addition to the memory that is given to the JVM. Maybe try configuring a higher limit to see whether it still fails?
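
A sketch of what raising the limit could look like, assuming the container is named elasticsearch and the Docker version supports docker update:

    # Give the container more headroom above the ~25.5 GB heap for direct buffers, thread stacks,
    # metaspace and so on; --memory-swap has to be raised together with --memory here.
    docker update --memory 40g --memory-swap 40g elasticsearch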

OK, appreciate you answering, thanks!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.