We have an Elasticsearch cluster of about 30 nodes and we often run into out-of-memory issues. When I look at the JVM memory it is usually around 75-80%, which is 24-25 GB of the 30 GB heap. The filter cache and field cache add up to about 5 GB on a node, so I'm trying to understand what the other 15-20 GB in the heap is. We have 10% caps for the filter and field caches. What could the other 15-20 GB in the heap be? We are on ES 1.4.2; are there any known memory leaks? Could these be objects waiting for garbage collection? If not, how would I tell?
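For reference, this is roughly how I'm pulling those cache numbers, just a sketch against a single node on localhost:9200; the field names are what I believe the 1.x node stats API exposes, so they may need adjusting:

import json
from urllib.request import urlopen

# Hypothetical host/port; point this at any node in the cluster.
URL = "http://localhost:9200/_nodes/stats/jvm,indices"
GB = 1024.0 ** 3

stats = json.loads(urlopen(URL).read().decode("utf-8"))
for node in stats["nodes"].values():
    heap = node["jvm"]["mem"]["heap_used_in_bytes"]
    idx = node["indices"]
    # Field names as I understand the 1.x node stats response; double-check them.
    fielddata = idx["fielddata"]["memory_size_in_bytes"]
    filter_cache = idx["filter_cache"]["memory_size_in_bytes"]
    segments = idx["segments"]["memory_in_bytes"]
    print("%s heap=%.1fGB fielddata=%.1fGB filter=%.1fGB segments=%.1fGB" % (
        node["name"], heap / GB, fielddata / GB, filter_cache / GB, segments / GB))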
How many shards and segments do you have? I believe both shards and segments require memory, and for segments, merging them can reduce the memory footprint.
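If you haven't checked already, the cluster stats API gives the totals in one call. A quick sketch, assuming a node on localhost:9200 and the field names I remember from the 1.x response:

import json
from urllib.request import urlopen

# Assumed endpoint; any node in the cluster will do.
stats = json.loads(urlopen("http://localhost:9200/_cluster/stats").read().decode("utf-8"))

idx = stats["indices"]
# shards.total and segments.count / segments.memory_in_bytes are the fields I recall from 1.x.
print("shards:", idx["shards"]["total"])
print("segments:", idx["segments"]["count"])
print("segment memory: %.1f GB" % (idx["segments"]["memory_in_bytes"] / 1024.0 ** 3))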
Are you graphing your heap usage? I think what's useful is looking at the max(min(heap)) over a few days: assuming you're using the 75% old-gen threshold, you'll see heap usage drop significantly when the collector runs, and that baseline memory usage is useful to know.
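To make the max(min(heap)) idea concrete, here's a rough sketch assuming you're already sampling heap_used_in_bytes per node (from the stats API or whatever you graph with):

from collections import defaultdict

def heap_baseline(samples):
    """samples: iterable of (datetime, heap_used_in_bytes) pairs for one node."""
    daily_min = defaultdict(lambda: float("inf"))
    for ts, heap in samples:
        # Minimum heap seen on each day, i.e. right after old-gen collections.
        daily_min[ts.date()] = min(daily_min[ts.date()], heap)
    # The largest of those daily minimums is the memory the node never gives
    # back over the window; that's the baseline worth watching.
    return max(daily_min.values())

Anything above that baseline should be churn the collector can reclaim when it kicks in; if the baseline itself keeps creeping toward the 75% threshold, the node really is short on heap.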