We've locked memory with mlockall: we raised the memory-lock limit in the systemd unit for Elasticsearch and set bootstrap.memory_lock: true in elasticsearch.yml. Yet Elasticsearch still somehow manages to allocate 145 GB of virtual memory. Out of our 32 GB of physical RAM, we've given 27 GB to Elasticsearch alone via -Xmx and -Xms in jvm.options.
I don't know what's causing it to allocate that much virtual memory.
We have disabled swapping; we checked our drives and the swap partition is gone. We also followed everything in the documentation.
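For reference, here is roughly what we configured (a sketch of our setup; the override file path may differ on your distribution):

```yaml
# elasticsearch.yml — ask Elasticsearch to lock its memory in RAM
bootstrap.memory_lock: true
```

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
# allow the elasticsearch user to lock an unlimited amount of memory
[Service]
LimitMEMLOCK=infinity
```

After adding the override we ran systemctl daemon-reload and restarted the service.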
When we call _nodes?filter_path=**.mlockall, it reports that mlockall is true.
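Concretely, this is the check we run (assuming the default localhost:9200 endpoint; it needs a live cluster to answer):

```
# ask every node whether memory locking is actually in effect
curl -s 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
```

Each node comes back with "mlockall" : true.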
You can ignore the 145 GB of virtual memory; it is never actually used. Note that 64-bit Linux has 16 exabytes of virtual address space available. The figure is a side effect of Java 8 threads combined with glibc malloc arenas on Linux, and nothing much to worry about: Java 8 allocates metaspace, and glibc reserves extra memory per thread in advance for performance reasons, but in practice it is not used at all.
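To make the arena effect concrete, here is a back-of-the-envelope calculation. The core count is hypothetical; 8 arenas per core and a 64 MiB reservation per arena are glibc's 64-bit defaults. Arenas are only one contributor alongside the heap, metaspace, and thread stacks, but they illustrate how virtual reservations pile up without any physical memory being touched:

```shell
# glibc on 64-bit caps malloc arenas at 8 per core by default,
# and each arena reserves 64 MiB of virtual address space up front.
cores=16              # hypothetical machine
arenas_per_core=8     # glibc 64-bit default cap
arena_mib=64          # per-arena virtual reservation in MiB

echo "malloc arenas alone: $(( cores * arenas_per_core * arena_mib / 1024 )) GiB of virtual memory"
# prints: malloc arenas alone: 8 GiB of virtual memory
```

If the number still bothers you, glibc's MALLOC_ARENA_MAX environment variable caps the arena count, but with mlockall working there is no practical need.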
Note that reserving 27 GB of 32 GB RAM (84%) for the ES JVM heap is rather unusual and may hurt performance. It leaves only 5 GB for the file system cache and every other process, far from the recommended 50%.
Generally the recommendation is to give the JVM heap 50% of physical memory, NOT exceeding 32 GB. This leaves room for Lucene to use the OS file system cache. You can read more here: Heap: Sizing and Swapping.
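On a 32 GB box that works out to something like this in jvm.options (16g is an illustrative value following the 50% rule, not a measured recommendation for your workload):

```
# jvm.options — give the heap ~50% of physical RAM,
# leaving the rest to the OS file system cache for Lucene
-Xms16g
-Xmx16g
```

Keeping -Xms and -Xmx equal also avoids heap-resizing pauses at runtime.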