I've read the recommendations for ES_HEAP_SIZE (http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/heap-sizing.html), which basically state to set -Xms and -Xmx to 50% of physical RAM.
It says the rest should be left for Lucene to use (OS filesystem caching).
But I'm confused about how Lucene uses that. Doesn't Lucene run in the same JVM as ES? If so, they would share the same max heap setting of 50%.
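For concreteness, here is roughly what I'm setting (a hypothetical 32 GB machine; ES_HEAP_SIZE is the 1.x-era environment variable, and the flags below are just what it expands to):

```shell
# Hypothetical 32 GB machine: pin the heap to half of physical RAM.
# ES_HEAP_SIZE is read by the 1.x-era startup scripts; it expands to
# the matching -Xms/-Xmx pair.
export ES_HEAP_SIZE=16g
ES_JAVA_OPTS="-Xms16g -Xmx16g"   # equivalent explicit JVM flags

echo "heap: $ES_HEAP_SIZE, flags: $ES_JAVA_OPTS"
```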
Lucene runs in the same JVM as Elasticsearch, but (by default) it mmaps files and then iterates over their content intelligently. That means most of its actual storage is "off heap" (it's a Java buzz-phrase). Anyway, Linux will serve reads from mmapped files from its page cache. That is why you want to leave Linux a whole bunch of unused memory.
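A quick sketch of the page cache at work on Linux (not ES-specific; the file is made up, and the `free(1)` column position assumes procps-ng):

```shell
# Reads go through the kernel page cache, so the "buff/cache" column of
# free(1) grows while the reading process's own heap does not.
dd if=/dev/zero of=/tmp/fake_segment.bin bs=1M count=32 2>/dev/null
free -m | awk '/^Mem:/ {print "buff/cache before:", $6}'
cat /tmp/fake_segment.bin > /dev/null   # read -> pages land in the cache
free -m | awk '/^Mem:/ {print "buff/cache after:", $6}'
rm -f /tmp/fake_segment.bin
```

The same pages stay cached after the process exits, which is exactly the memory you are "leaving to Lucene".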
I see, but I'm running on Windows. Is the behavior similar, or does this
not exist on Windows?
Indeed, the behaviour is the same on Windows and Linux: memory that is not used by processes is used by the operating system to cache the hottest parts of the file system. The reason the docs say the rest should be left to Lucene is that most disk accesses that Elasticsearch performs are done through Lucene.
On Wed, Nov 26, 2014 at 8:44 PM, Nikolas Everett nik9000@gmail.com wrote:
I imagine all operating systems have some kind of disk caching. I just happen to be used to Linux.
Is there any way to actually see how much memory is used by the file system cache?
A strange problem I see: my machine has 32GB in total, and I gave 16GB to ES. Very often the total OS memory usage will reach 100%, but Task Manager only shows around 12~13GB for the java process. When I kill the java process, memory usage drops to only 20%. I can't explain why this is happening. Not sure if it's the file system cache.
I used procexp and VMMap to double-check; yes, I think they are file system cache.
Is there any way to control the size of the file system cache? Because right now it's easily driving up OS memory consumption, and when it reaches 100%, the node fails to respond...
On Wednesday, February 25, 2015 at 10:58:10 PM UTC-8, Mark Walkom wrote:
Definitely sounds like FS cache to me.
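For anyone wanting to confirm this themselves, a minimal check on Linux (the awk field assumes procps-ng `free`; on Windows the analogous numbers are RAMMap's "Standby" / "Mapped File" rows, or VMMap per process as mentioned above):

```shell
# Quick look at how much RAM the OS is currently using as file-system cache.
free -m | awk '/^Mem:/ {printf "total=%d MB, used=%d MB, fs-cache=%d MB\n", $2, $3, $6}'
```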