Elasticsearch taking all the memory of the system


I have dedicated data nodes with 32 GB each (3 nodes), which I recently upgraded from 16 GB because the JVM was taking all the memory. But the system is again hogging memory.

I have also set the Xms and Xmx values to 10 GB, with memory lock set to true.
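For reference, the settings described above usually look like the following (a sketch assuming a default package install, where the config files live under `/etc/elasticsearch/`; file paths may differ on your system):

```
# /etc/elasticsearch/jvm.options — fixed 10 GB heap (Xms must equal Xmx)
-Xms10g
-Xmx10g
```

```
# /etc/elasticsearch/elasticsearch.yml — lock the heap to prevent swapping
bootstrap.memory_lock: true
```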

How can I prevent Elasticsearch from taking all the system memory?


What else do you want to use that memory for if not for Elasticsearch? Elasticsearch will happily consume all the memory in the system: the portion that you allocate to the JVM heap, and the remainder for memory mapping files and leveraging the filesystem cache.

I am not worried that Elasticsearch is taking all my memory as such. But it gets to the point where there is only about 100 MB of memory left for the entire system, and I believe that is not good. An application should not eat up all the memory just to perform better while making the rest of the system inaccessible. This takes my cluster down, and I have to restart it manually to get it working again. All I want to know is how to keep the system accessible at all times without having to free system memory manually.

This is not accurate. The filesystem cache and memory-mapped files will be freed if the system needs more physical memory, so the system should not become inaccessible.
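You can see this distinction on any Linux box: `MemFree` alone looks alarmingly low, while `MemAvailable` counts the page cache the kernel can reclaim on demand. A quick check (assumes Linux, reading `/proc/meminfo` directly):

```shell
# MemFree: memory that is completely unused right now.
# MemAvailable: an estimate of memory usable by new applications
# without swapping — it includes reclaimable page cache, which is
# the bulk of what Elasticsearch's memory-mapped files occupy.
grep -E '^(MemFree|MemAvailable|Cached):' /proc/meminfo
```

If `MemAvailable` is healthy, the "only 100 MB left" reading is just cache doing its job, not a leak.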

Additionally, for server applications like Elasticsearch the presumption is that the application will have full access to system resources, and that if the system operator wants something different, they will set up those restrictions themselves (e.g., by putting the application in a cgroup that limits resource usage).
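On a systemd-based distribution, one way to impose such a cgroup limit is a drop-in override for the service. A minimal sketch, assuming the service is named `elasticsearch.service` and the cap of 24G is just an illustrative value:

```
# /etc/systemd/system/elasticsearch.service.d/override.conf
# (created via: systemctl edit elasticsearch.service)
[Service]
MemoryMax=24G
```

After `systemctl daemon-reload` and a service restart, the kernel will keep the whole process tree, heap plus off-heap, under that ceiling. Note that hitting the limit can cause the OOM killer to terminate the node, so size it with ample headroom above the JVM heap.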

This is not expected. What information can you provide regarding this (e.g., logs)?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.