Hi,
I am using Elasticsearch 1.0.0 and have a cluster of 7 nodes (3 master, 2 data and 2 client nodes). The problem I am facing is that huge .hprof files are being generated on the data nodes: around 9 GB on datanode1 and around 3 GB on datanode2.
Meanwhile, these lines are appearing frequently in the data node logs:
On the other hand, I have never got an OutOfMemoryException on any node.
Elasticsearch ships with -XX:+HeapDumpOnOutOfMemoryError enabled, which causes the JVM to write a heap dump whenever it encounters an OutOfMemoryError. The Elasticsearch codebase is rife with blocks that catch Throwable and swallow it, so you might not see every OutOfMemoryError in your logs, and an uncaught OutOfMemoryError will not bring down Elasticsearch anyway. This is changing, though: Elasticsearch 5.0.0 will stop catching Throwable, and there is an open PR to die on OutOfMemoryError.
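If you want to confirm that this is what is producing the .hprof files, you can inspect the flags on a running data node with the standard JDK tools; a minimal sketch (how you look up the PID is up to you, jps is one option):

    # find the Elasticsearch process id, then inspect the heap-dump flags
    jps -l | grep -i elasticsearch
    jinfo -flag HeapDumpOnOutOfMemoryError <pid>   # prints -XX:+HeapDumpOnOutOfMemoryError if enabled
    jinfo -flag HeapDumpPath <pid>                 # where the dumps are written, if set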
If you want to stop the JVM from dumping the heap when it encounters an OutOfMemoryError, you need to remove the flag -XX:+HeapDumpOnOutOfMemoryError from the startup parameters passed to the JVM. Keep in mind though that you really want to diagnose why Elasticsearch is running out of memory, and heap dumps might be useful for doing that.
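For example, a sketch assuming a 1.x tar.gz install where JAVA_OPTS is built up in bin/elasticsearch.in.sh (package installs may keep these settings elsewhere, e.g. /etc/default/elasticsearch):

    # bin/elasticsearch.in.sh -- comment out or delete this line, then restart the node:
    #JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"

    # If you cannot restart right away, the flag is manageable and can be
    # switched off on the running JVM:
    jinfo -flag -HeapDumpOnOutOfMemoryError <pid>

    # Alternatively, keep the dumps but write them to a disk with enough free space:
    #JAVA_OPTS="$JAVA_OPTS -XX:HeapDumpPath=/path/with/enough/space"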