Virtual Memory Issue

I've locked the memory with mlockall, changed the systemd unit for Elasticsearch, and set the memory lock in elasticsearch.yml, yet Elasticsearch somehow still manages to allocate 145 GB of virtual memory. Out of our 32 GB of physical RAM, we've allocated 27 GB to Elasticsearch alone via jvm.options using -Xmx and -Xms.
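For reference, here's a rough sketch of what we set (the file locations assume the default package layout; adjust for your install):

# systemd override for the Elasticsearch service
[Service]
LimitMEMLOCK=infinity

# elasticsearch.yml
bootstrap.memory_lock: true

# jvm.options
-Xms27g
-Xmx27g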

I don't know what's causing it to allocate that much virtual memory.

We have disabled swapping; we checked our drives and there is no swap anymore. We followed everything in the documentation as well.
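Roughly how we checked (plain Linux commands, nothing Elasticsearch-specific):

swapoff -a                 # turn off any active swap
swapon --show              # prints nothing when no swap is active
grep -i swap /etc/fstab    # make sure no swap entry comes back on reboot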

When we do _nodes?filter_path=**.mlockall it tells us mlockall is true.

Thank you in advance

mlockall only locks the JVM heap; the virtual memory is handled by the OS.
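You can see the difference from the kernel's side in /proc; for example (the PID and the numbers below are only illustrative):

grep -E 'VmSize|VmRSS|VmLck' /proc/<elasticsearch-pid>/status
VmSize:  152043520 kB    # virtual address space reserved (your ~145 GB)
VmRSS:    28835840 kB    # memory actually resident in RAM
VmLck:    28835840 kB    # memory locked by mlockall (the JVM heap)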

Yes,

But swap on the server is turned off.

Check this out:

Hi,

I've read it over and ran

ulimit -l unlimited

and started Elasticsearch again, and it's still using 145 GB of virtual memory.

Thank You

I also ran the mlockall check again:

root@server:~# curl -X GET "localhost:9200/_nodes?filter_path=**.mlockall&pretty"
{
  "nodes" : {
    "ldx1RanBTyuE5FPKY660Dw" : {
      "process" : {
        "mlockall" : true
      }
    }
  }
}

Picture of our processes in htop:

The 145 GB is not allocated memory; it's the VIRT column (virtual address space).

The RSS column of 27.5 GB is what is actually allocated and locked.

You can ignore the 145 GB of virtual memory; it's never used. Note that 64-bit Linux has 16 exabytes of virtual address space available. The large reservation is a side effect of Java 8 threads combined with Linux glibc malloc arenas, and is nothing to worry about. Java 8 allocates metaspace, and Linux reserves extra memory per thread in advance for performance reasons, but in practice it is not used at all.
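If the big VIRT number still bothers you, one optional, general-purpose knob is to cap the number of glibc malloc arenas. This is a plain glibc environment variable, not an Elasticsearch setting, so treat it as a sketch of optional tuning:

# e.g. in the systemd override for the Elasticsearch service
[Service]
Environment=MALLOC_ARENA_MAX=4

It only reduces how much address space glibc reserves per thread; it does not change how much RAM is actually used.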

Note that reserving 27 GB of the 32 GB of RAM (84%) for the ES JVM heap is rather unusual and may have an impact on performance. It leaves only 5 GB for the file system cache and other processes, and is far from the recommended 50%.


For optimal performance, what should the memory allocation be?

We are working with about 3 billion records in total.

Generally, the recommendation is that the JVM heap should get 50% of physical memory, NOT to exceed 32 GB. This leaves room for Lucene to use the OS file system cache. You can read more here: Heap: Sizing and Swapping
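On a 32 GB machine that works out to roughly this in jvm.options (set both values the same so the heap doesn't resize at runtime):

-Xms16g
-Xmx16g

which leaves the other ~16 GB for the OS file system cache and other processes.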

Rob
