Does an OutOfMemoryError result in a heap dump or not?

Hi,
I have a three-node cluster running in my environment.

I got the below error on all the nodes:

[2017-06-27T06:15:50.391Z], details[node_left[YznXMVjSSq8Kfw]]]], indexUUID [YRBQDOFMlfMW5UnFg], message [failed to perform indices:data/write/bulk[s] on replica on node {datanode-0}{GDyfoFmuJxJg}{10.0.0.X}{10.0.0.X:9300}{max_local_storage_nodes=1, master=true}], failure [RemoteTransportException[[datanode-0][10.0.0.X:9300][indices:data/write/bulk[s][r]]]; nested: OutOfMemoryError[Java heap space]; ]
RemoteTransportException[[datanode-0][10.0.0.8:9300][indices:data/write/bulk[s][r]]]; nested: OutOfMemoryError[Java heap space];
Caused by: java.lang.OutOfMemoryError: Java heap space
	at org.elasticsearch.common.io.stream.StreamInput.readBytesReference(StreamInput.java:95)
	at org.elasticsearch.common.io.stream.StreamInput.readBytesReference(StreamInput.java:84)
	at org.elasticsearch.action.index.IndexRequest.readFrom(IndexRequest.java:697)
	at org.elasticsearch.action.bulk.BulkItemRequest.readFrom(BulkItemRequest.java:104)
	at org.elasticsearch.action.bulk.BulkItemRequest.readBulkItem(BulkItemRequest.java:89)
	at org.elasticsearch.action.bulk.BulkShardRequest.readFrom(BulkShardRequest.java:89)
	at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:222)
	at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:116)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

When I run `ps -ef | grep elastic` on my Linux server, it shows:

elastic+  1867     1 13 Jul03 ?        08:31:59 /usr/bin/java -Xms1720m -Xmx1720m -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Dmapper.allow_dots_in_name=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.4.1.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -d -p /var/run/elasticsearch/elasticsearch.pid --default.path.home=/usr/share/elasticsearch --default.path.logs=/var/log/elasticsearch --default.path.data=/var/lib/elasticsearch --default.path.conf=/etc/elasticsearch

I searched my whole server for files with the .hprof extension but didn't find any.
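For reference, a search along these lines would cover it; by default the JVM writes the dump as java_pid&lt;pid&gt;.hprof into the process's working directory. The sketch below demonstrates the name pattern on a scratch directory (1867 is the PID from the `ps` output above):

```shell
# The JVM's default heap dump name is java_pid<pid>.hprof, written to the
# process's working directory. Demonstrated here on a scratch directory:
dir=$(mktemp -d)
touch "$dir/java_pid1867.hprof"   # 1867 is the Elasticsearch PID shown by ps -ef
find "$dir" -name '*.hprof'       # prints the path of any dump file found
rm -rf "$dir"
```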

I want to know whether the above error will create a heap dump, and if so, where the heap dump is located.

Thanks

Hi @Narmatha,

Based on the JVM arguments that you show, it should create a heap dump. However, there are several reasons why the JVM might be unable to create a heap dump:

  • The complete path (including the file name) is longer than 4096 or 4097 characters (depending on the Linux distribution). This does not seem to be the case on your system though, because the heap dump should end up in /usr/share/elasticsearch.
  • No more system memory is available (malloc fails while creating the heap dump).
  • I/O errors: the current user cannot write a file to the provided directory, another file with the same name is already present (the current process would need the same PID as a previous one, so this is rather unlikely), or too little disk space is available.
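The I/O causes can be ruled out by checking that the target directory is writable by the user Elasticsearch runs as. A minimal sketch, using a scratch directory so it runs anywhere (substitute the real dump directory and run it as the elasticsearch user):

```shell
# Verify a candidate dump directory is writable before relying on it.
# Demonstrated on a scratch directory; replace with the real dump path.
dir=$(mktemp -d)
if touch "$dir/.hprof_writetest" 2>/dev/null; then
    echo "writable: $dir"
    rm -f "$dir/.hprof_writetest"
else
    echo "not writable: $dir"
fi
rm -rf "$dir"
```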

The JVM should report the reason on stdout (which may be redirected to syslog).
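On a successful dump the JVM prints a line of the form "Dumping heap to &lt;path&gt; ...", so grepping the stdout log or syslog for it is a quick check. The sketch below simulates the message with a scratch log file, since the real log location varies:

```shell
# The real message would be in the Elasticsearch stdout log or syslog; the
# scratch file below only simulates it so the grep pattern can be shown.
log=$(mktemp)
echo 'Dumping heap to /usr/share/elasticsearch/java_pid1867.hprof ...' >> "$log"
grep -i 'dumping heap' "$log"   # the line the JVM prints on a successful dump
rm -f "$log"
```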

By the way, you can also explicitly set the heap dump path with -XX:HeapDumpPath=/some/output/directory.
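For a packaged Elasticsearch 2.x install, extra JVM flags like this can be passed via the ES_JAVA_OPTS environment variable in /etc/default/elasticsearch (Debian) or /etc/sysconfig/elasticsearch (RPM). The directory below is just a placeholder, not a recommendation:

```shell
# Hypothetical addition to /etc/default/elasticsearch (or /etc/sysconfig/elasticsearch).
# The directory must exist and be writable by the elasticsearch user.
ES_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps/elasticsearch"
```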

Daniel

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.