Hi, everyone.
I am currently trying to find out how memory works in Elasticsearch.
I am using BigDesk to monitor the heap space of my nodes, and it seems to
me that the heap memory only starts to be freed when it is almost all used
up. Is that normal?
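For reference, here is roughly how I am double-checking the numbers outside
BigDesk (a minimal sketch; it assumes a node listening on localhost:9200,
and the stats path may differ between versions, older releases use
/_nodes/stats?jvm=true):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: poll the node stats API and print the raw JSON, then
// look for heap_used_in_bytes / heap_max_in_bytes in the jvm section.
public class HeapWatch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:9200/_nodes/stats/jvm");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}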
It seems to cause OOM errors when making big queries.
Right after a node starts, those queries work fine (a bit lengthy, but the
cache effect hasn't kicked in yet, so nothing special).
But once several other queries have been run, I start getting OOM errors
that ultimately lead to a restart of the cluster.
The heap memory does not seem to be freed efficiently enough to make room
for the new query. Does that seem logical? Is there a setting somewhere to
define when to clear the memory? Is there a way to force-clear the heap
memory?
The flush API doesn't seem to do anything in that regard.
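Would the indices clear-cache API be closer to what I want than flush?
Something like this is what I mean (a minimal sketch; it assumes a node on
localhost:9200 and that the /_cache/clear endpoint exists in your version):

import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: the clear-cache API drops the filter/field caches, which
// is closer to "freeing heap" than flush (flush only persists the
// transaction log to disk).
public class ClearCaches {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:9200/_cache/clear");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}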
You're missing a lot of details: how much memory do you have for Elasticsearch,
how many docs / how big is the index / ..., what queries are you running, etc.
to be freed when it is almost all used up. Is that normal?
Yes, that is the Java way.
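As a rough illustration, plain Java behaves the same way; a minimal sketch
(nothing Elasticsearch-specific, just watching the heap fill under
allocation pressure):

// The JVM normally lets the heap fill up and only collects when allocation
// pressure demands it, so "used" climbing toward "max" before dropping is
// expected behaviour, not a leak.
public class GcDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 5; i++) {
            byte[] junk = new byte[64 * 1024 * 1024]; // 64 MB that becomes garbage immediately
            long used = rt.totalMemory() - rt.freeMemory();
            System.out.printf("used=%d MB, max=%d MB%n",
                    used >> 20, rt.maxMemory() >> 20);
        }
        System.gc(); // only a hint; the JVM is free to ignore it
    }
}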
Peter.
On Monday, December 10, 2012 4:54:33 AM UTC-5, DH wrote:
Hi, everyone.
I am currently trying to find out how memory works in Elasticsearch.
I am using BigDesk to monitor the heap space of my nodes, and it seems to
me that the heap memory only starts to be freed when it is almost all used
up. Is that normal?
That's typical, but you can control it with a few knobs like -XX:MinHeapFreeRatio,
-XX:SurvivorRatio, -XX:TargetSurvivorRatio, etc.
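To verify what those flags actually resolve to on a running node, something
like this works on a HotSpot JVM (a minimal sketch; the com.sun.management
bean is HotSpot-specific, so this is an assumption about your JVM):

import java.lang.management.ManagementFactory;

// Prints the effective value of each GC tuning flag, e.g. to confirm the
// options you passed on the command line actually took effect.
public class ShowGcFlags {
    public static void main(String[] args) {
        com.sun.management.HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(
                        com.sun.management.HotSpotDiagnosticMXBean.class);
        for (String flag : new String[] {
                "MinHeapFreeRatio", "SurvivorRatio", "TargetSurvivorRatio"}) {
            System.out.println(bean.getVMOption(flag));
        }
    }
}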