[08:45:38,311][WARN ][http.netty ] [Portal] Caught exception while handling client http traffic, closing connection [id: 0xbfe4192a, /IP-redacted:53149 => /IP-redacted:9200]
java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.common.netty.buffer.HeapChannelBuffer.<init>(HeapChannelBuffer.java:42)
at org.elasticsearch.common.netty.buffer.BigEndianHeapChannelBuffer.<init>(BigEndianHeapChannelBuffer.java:34)
at org.elasticsearch.common.netty.buffer.ChannelBuffers.buffer(ChannelBuffers.java:134)
at org.elasticsearch.common.netty.buffer.HeapChannelBufferFactory.getBuffer(HeapChannelBufferFactory.java:69)
at org.elasticsearch.common.netty.buffer.CompositeChannelBuffer.copy(CompositeChannelBuffer.java:568)
at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.copy(AbstractChannelBuffer.java:522)
at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.appendToCumulation(HttpChunkAggregator.java:207)
at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:174)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:455)
at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:538)
at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:437)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Any ideas? Is it running out of heap during HTTP transport?
And why does Elasticsearch basically just lock up after an error
like this - isn't there a way to rescue from exceptions and just move on?
Unfortunately you can't really recover from an OOM; you really need to restart
the node. The question here is why you run out of memory - the stack trace only
shows where the OOM finally hit, which is likely not where the memory actually goes.
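To illustrate why "rescue and move on" doesn't work here (a toy sketch in
plain Java, not Elasticsearch code):

// Toy illustration: catching OutOfMemoryError compiles and runs, but it
// does not put the JVM back into a safe state.
public class OomDemo {
    public static void main(String[] args) {
        try {
            // Force heap exhaustion with one huge allocation.
            byte[] huge = new byte[Integer.MAX_VALUE - 8];
            System.out.println(huge.length);
        } catch (OutOfMemoryError e) {
            // We "caught" it, but the heap may still be nearly full and
            // other threads may have failed mid-operation, leaving shared
            // state inconsistent - hence: restart the node.
            System.err.println("caught OOM, but the JVM state is suspect");
        }
    }
}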
Can you provide more details, like your usage pattern (i.e. whether you do
faceting / sorting etc.) and your settings for Xmx etc.?
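If you're not sure what -Xmx the node actually ended up with, something like
this prints the effective max heap from inside the JVM (a hypothetical helper
class, not part of Elasticsearch):

// Prints the effective max heap size, i.e. what -Xmx resolved to.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}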
simon
Hi,
It looks like there may be a memory leak in one of the classes/methods in the
stack trace (a stream/buffer resource that isn't closed). Please review whether
all stream/buffer resources are closed in those classes/methods.
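As a generic sketch of that review (plain Java try-with-resources; the class
and method here are hypothetical, not taken from the Netty source above):

// Generic sketch of the suggestion: use try-with-resources so a stream is
// closed even when an exception is thrown mid-read.
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CloseStreamsDemo {
    static long countBytes(String path) throws IOException {
        try (InputStream in = new FileInputStream(path)) {
            long n = 0;
            while (in.read() != -1) {
                n++;
            }
            return n;
        } // in.close() runs here automatically, on success or failure
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countBytes(args[0]));
    }
}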