Large .hprof files on data nodes

Hi,
I am using Elasticsearch 1.0.0 with a cluster of 7 nodes (3 master, 2
data & 2 client nodes).
The problem I am facing is that huge .hprof files are being generated on
the data nodes: around 9 GB on datanode1 and around 3 GB on datanode2.

Meanwhile, these lines appear frequently in the data node logs:

[2014-06-09 00:10:34,442][WARN ][monitor.jvm ] [Server116]
[gc][young][290308][34996] duration [1.1s], collections [1]/[1.5s], total
[1.1s]/[48.1m], memory [3.6gb]->[3.4gb]/[7.9gb], all_pools {[young]
[508.4mb]->[735.3kb]/[532.5mb]}{[survivor]
[66.5mb]->[66.5mb]/[66.5mb]}{[old] [3.1gb]->[3.3gb]/[7.3gb]}
[2014-06-09 00:19:51,533][WARN ][monitor.jvm ] [Server116]
[gc][young][290864][35014] duration [1.3s], collections [1]/[1.8s], total
[1.3s]/[48.2m], memory [4.1gb]->[4.1gb]/[7.9gb], all_pools {[young]
[290.5mb]->[3.1mb]/[532.5mb]}{[survivor] [9.5mb]->[66.5mb]/[66.5mb]}{[old]
[3.8gb]->[4gb]/[7.3gb]}

The cluster contains 35 indices with 170 GB of data (one shard & one
replica per index).

Is there anything wrong with the Java (JVM) configuration, or is there
some other issue? What should I do to resolve this?

Thanks in advance,
Bharvi Dixit



On the other hand, I have never gotten an OutOfMemoryError on any node.


Did you resolve this issue?


Elasticsearch ships with -XX:+HeapDumpOnOutOfMemoryError enabled, which causes the JVM to write a heap dump whenever it encounters an OutOfMemoryError. The Elasticsearch codebase is rife with blocks that catch Throwable and swallow it, which means you might not see every instance of OutOfMemoryError in your logs. And an uncaught OutOfMemoryError will not bring down Elasticsearch anyway. This is changing, though: Elasticsearch 5.0.0 will stop catching Throwable, and there is an open PR to die on OutOfMemoryError.
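
As a quick check (a sketch on my part; the PID and paths below are placeholders for your environment), the timestamps on the .hprof files should roughly line up with when an OutOfMemoryError occurred, and you can confirm the flag is active on a running node with standard JDK tooling:

  # find the PID of the Elasticsearch JVM on the data node
  jps -l | grep -i elasticsearch

  # print whether the heap-dump-on-OOM flag is enabled for that process
  jinfo -flag HeapDumpOnOutOfMemoryError <pid>

  # the modification time of each dump is roughly when the OOM happened
  ls -lh /path/to/elasticsearch/*.hprof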

If you want to stop the JVM from dumping the heap when it encounters an OutOfMemoryError, you need to remove the flag -XX:+HeapDumpOnOutOfMemoryError from the startup parameters passed to the JVM. Keep in mind though that you really want to diagnose why Elasticsearch is running out of memory, and heap dumps might be useful for doing that.
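
For reference, a sketch of what that could look like on a 1.x tarball install (the file name, and whether the line is present at all, depend on your version and packaging, so treat the lines below as assumptions rather than exact contents):

  # bin/elasticsearch.in.sh (or wherever JAVA_OPTS is assembled on your install)
  # remove or comment out this line to stop heap dumps on OOM:
  # JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"

  # ...or keep it and point the dumps at a disk with enough free space:
  JAVA_OPTS="$JAVA_OPTS -XX:HeapDumpPath=/path/with/space"

To see how close each node's heap is to its limit before the next OutOfMemoryError, the nodes stats API reports per-node JVM heap usage (the jvm metric filter should work on 1.x; the full _nodes/stats response contains the same jvm section either way):

  curl -s 'localhost:9200/_nodes/stats/jvm?pretty'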
