How to limit the memory used by Elasticsearch


#1

I am using Elasticsearch 1.6.0 on a Windows 2008 R2 server. I want to limit the memory used by the Elasticsearch server. I set the following and start the server:
set ES_HEAP_SIZE=4g

The Windows Resource Monitor shows that the java process uses 4 GB of committed memory. I then perform some searches and retrieve 7 million documents, after which the memory usage of the java process becomes:
Committed: 4 GB (no change)
Private: 4 GB
Shareable: 9 GB
Working set: 13 GB

Why does the ES JVM use 9 GB of shareable memory? This causes my machine to hit 100% memory usage and it becomes very slow.

How can I limit the shareable memory used by Elasticsearch?

Thanks
Ningjun


(Nemo) #2

Did you try setting -Xms and -Xmx for the ES process? That limits the heap size.
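For example, something like this from a Windows command prompt (the install path is hypothetical, and this assumes the 1.x startup script picks up JAVA_OPTS; ES_HEAP_SIZE is the supported shortcut for the same thing):

```shell
REM Illustrative sketch only. Setting -Xms and -Xmx to the same value pins
REM the JVM heap at 4 GB so it can neither grow nor shrink.
set JAVA_OPTS=-Xms4g -Xmx4g
C:\elasticsearch-1.6.0\bin\elasticsearch.bat
```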


#3

The environment variable ES_HEAP_SIZE achieves the same thing as -Xms and -Xmx. The heap size is 4GB, but ES uses a lot more "shareable" memory than 4GB. I believe it is Lucene which greedily eats up all available kernel memory, no matter what the Java heap size is.
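For what it's worth, a large "shareable" figure is characteristic of memory-mapped files rather than heap allocations. A small, Elasticsearch-free sketch in Python (file name and size are arbitrary) shows the mechanism that Lucene's mmap-based directories rely on:

```python
import mmap
import os
import tempfile

# Write a scratch file and map it into the process's address space, roughly
# the way Lucene maps index segment files. Mapped pages are file-backed
# ("shareable" in Resource Monitor): the OS can drop them under memory
# pressure, and they never count against the JVM heap set by -Xms/-Xmx.
path = os.path.join(tempfile.gettempdir(), "mmap_demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * (1024 * 1024))  # 1 MiB stand-in for an index segment

with open(path, "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    n = len(mapped)
    # Touching the pages faults them into the working set, which is why the
    # working set can balloon far past the heap size; the pages stay
    # reclaimable cache rather than private process memory.
    total = sum(mapped[i] for i in range(0, n, 4096))
    mapped.close()

os.remove(path)
print("mapped bytes:", n, "sampled sum:", total)
```

So the 9 GB is not a leak in the usual sense; it is the OS keeping index file pages cached on the JVM's behalf.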


(Nemo) #4

Can I see what your "ps -ef | grep java" output looks like?


(Mark Walkom) #5

This is Lucene caching things. Windows may have some method of restricting the overall memory of a process, but that's not really something we can help with.

Don't be worried though; if other processes need memory then they can get it. The OS is not going to starve them just to keep things cached.

Also, that "ps -ef | grep java" won't work on Windows.


(Nemo) #6

My bad! I did not notice that it's windows server!


#7

Thanks for your reply. It does starve Windows: a Spark UI server running on the same machine responds slowly due to lack of memory.

I am surprised that there is no config setting to limit the size of Lucene's cache memory. Does this mean Lucene will greedily eat up all memory and nobody can stop it?


(Magnus Bäck) #8

I am surprised that there is no config setting to limit the cache memory size of Lucene? This means Lucene will greedily eat up all memory and nobody can stop it?

Lucene just maps files into memory and has little control over how much of them is made part of the process's working set (i.e. the part of the process's address space that's resident in RAM). See http://blogs.technet.com/b/clinth/archive/2012/10/11/can-a-process-be-limited-on-how-much-physical-memory-it-uses.aspx for some more background. It mentions a tool that can be used to limit the maximum working set size of a process; perhaps that's useful to you?
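If you would rather script that limit than use a standalone tool, the relevant Win32 call is SetProcessWorkingSetSize. A rough sketch via Python's ctypes (the API names are real Win32 functions, but treat the wrapper itself as illustrative; the limit is soft, and Windows may grow the working set again unless a hard quota is enforced, e.g. through a job object):

```python
import ctypes
import sys

def limit_working_set(pid, min_bytes, max_bytes):
    """Ask Windows to clamp a process's working set between min_bytes and
    max_bytes via SetProcessWorkingSetSize. Returns True on success.
    Windows-only; on other platforms it simply returns False."""
    if sys.platform != "win32":
        return False
    kernel32 = ctypes.windll.kernel32
    PROCESS_SET_QUOTA = 0x0100  # access right needed to change quotas
    handle = kernel32.OpenProcess(PROCESS_SET_QUOTA, False, pid)
    if not handle:
        return False  # e.g. no such process, or insufficient privileges
    try:
        ok = kernel32.SetProcessWorkingSetSize(
            handle,
            ctypes.c_size_t(min_bytes),
            ctypes.c_size_t(max_bytes),
        )
        return bool(ok)
    finally:
        kernel32.CloseHandle(handle)
```

Note this only trims resident pages; the mapped index files stay mapped, so performance may suffer as pages are faulted back in.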

