Heap space error

I am getting out-of-heap-space and invalid-data-length errors. Is this
something I need to change in Elasticsearch, such as giving it a bigger
heap? I am using the default. A Hadoop client is sending bulk requests of
JSON documents that average around 200 KB each.

[2012-05-18 10:53:08,592][WARN ][transport.netty ] [Portal] Exception caught on netty layer [[id: 0x73d9024f, /172.18.62.198:60052 => /172.18.62.202:9300]]
java.lang.OutOfMemoryError: Java heap space

[2012-05-18 10:52:55,976][WARN ][transport.netty ] [Portal] Exception caught on netty layer [[id: 0x48359a33, /172.18.62.197:50372 => /172.18.62.202:9300]]
java.io.StreamCorruptedException: invalid data length: -1121714720
    at org.elasticsearch.transport.netty.MessageChannelHandler.callDecode(MessageChannelHandler.java:131)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:95)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:792)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

Are you getting that on the Elasticsearch cluster nodes? You might be
sending too large a bulk request; you need to either increase the memory
allocated to ES (see the installation guide) and/or reduce the size of the
bulk request.
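
For example, instead of accumulating the whole Hadoop output into one
request, flush the bulk every N documents. A rough sketch with the
0.19-era Java client (the index name, type, batch size, and class name are
placeholders for illustration, not anything from the original setup):

import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

// Send documents in bounded batches instead of one huge bulk request.
public class ChunkedBulkIndexer {

    private static final int BATCH_SIZE = 100; // tune to your heap and document size

    public static void indexAll(Client client, Iterable<String> jsonDocs) {
        BulkRequestBuilder bulk = client.prepareBulk();
        for (String json : jsonDocs) {
            bulk.add(client.prepareIndex("myindex", "mytype").setSource(json));
            if (bulk.numberOfActions() >= BATCH_SIZE) {
                execute(bulk);
                bulk = client.prepareBulk(); // start a fresh batch
            }
        }
        if (bulk.numberOfActions() > 0) {
            execute(bulk); // send the final partial batch
        }
    }

    private static void execute(BulkRequestBuilder bulk) {
        BulkResponse response = bulk.execute().actionGet();
        if (response.hasFailures()) {
            // a real indexer should retry or log the per-item failures
            throw new RuntimeException("bulk failed: " + response.buildFailureMessage());
        }
    }
}

With documents averaging 200 KB, a batch of 100 keeps each bulk request
around 20 MB, which a default-sized heap can usually absorb. On the server
side, the heap can be raised by setting the ES_HEAP_SIZE environment
variable (or ES_MIN_MEM/ES_MAX_MEM) before starting the node.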


I'm seeing the same problem with 0.19.8.

This is a portion of the logs from our cluster.

[2012-09-27 02:32:36,085][INFO ][monitor.jvm ] [delayed_job2] [gc][ParNew][382223][2591] duration [1.4s], collections [2]/[4s], total [1.4s]/[2.4m], memory [239.9mb]->[258mb]/[495.3mb], all_pools {[Code Cache] [3.4mb]->[3.4mb]/[48mb]}{[Par Eden Space] [86.8mb]->[23.6kb]/[133.1mb]}{[Par Survivor Space] [16.6mb]->[0b]/[16.6mb]}{[CMS Old Gen] [136.3mb]->[258mb]/[345.6mb]}{[CMS Perm Gen] [23.9mb]->[23.9mb]/[82mb]}
[2012-09-27 02:33:55,001][WARN ][transport.netty ] [delayed_job2] Exception caught on netty layer [[id: 0x0843249a, /192.168.100.28:59408 => /192.168.100.224:9300]]
java.lang.OutOfMemoryError: Java heap space
[2012-09-27 02:33:55,002][WARN ][transport ] [elastic1] Received response for a request that has timed out, sent [55561ms] ago, timed out [25561ms] ago, action [discovery/zen/fd/ping], node [[delayed_job2][5wYnXMqPSRaGxgJreCLtIA][inet[/192.168.100.28:9300]]{client=true, tag=delayed_job2, data=false, max_local_storage_nodes=1, master=false}], id [37563308]
[2012-09-27 02:33:55,231][WARN ][netty.channel.socket.nio.AbstractNioWorker] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
    at org.elasticsearch.common.netty.buffer.HeapChannelBuffer.<init>(HeapChannelBuffer.java:42)
    at org.elasticsearch.common.netty.buffer.BigEndianHeapChannelBuffer.<init>(BigEndianHeapChannelBuffer.java:34)
    at org.elasticsearch.common.netty.buffer.ChannelBuffers.buffer(ChannelBuffers.java:134)
    at org.elasticsearch.common.netty.buffer.HeapChannelBufferFactory.getBuffer(HeapChannelBufferFactory.java:69)
    at org.elasticsearch.common.netty.buffer.AbstractChannelBufferFactory.getBuffer(AbstractChannelBufferFactory.java:48)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:81)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
[2012-09-27 02:33:55,232][WARN ][http.netty ] [delayed_job2] Caught exception while handling client http traffic, closing connection [id: 0x60613546, /0:0:0:0:0:0:0:1:58816 => /0:0:0:0:0:0:0:1:9200]
java.lang.OutOfMemoryError: Java heap space
    at org.elasticsearch.common.jackson.util.BufferRecycler.balloc(BufferRecycler.java:102)
    at org.elasticsearch.common.jackson.util.BufferRecycler.allocByteBuffer(BufferRecycler.java:57)
    at org.elasticsearch.common.jackson.io.IOContext.allocWriteEncodingBuffer(IOContext.java:144)
    at org.elasticsearch.common.jackson.smile.SmileGenerator.<init>(SmileGenerator.java:292)
    at org.elasticsearch.common.jackson.smile.SmileFactory._createJsonGenerator(SmileFactory.java:364)
    at org.elasticsearch.common.jackson.smile.SmileFactory.createJsonGenerator(SmileFactory.java:275)
    at org.elasticsearch.common.jackson.smile.SmileFactory.createJsonGenerator(SmileFactory.java:263)
    at org.elasticsearch.common.xcontent.smile.SmileXContent.createGenerator(SmileXContent.java:66)
    at org.elasticsearch.common.xcontent.XContentBuilder.<init>(XContentBuilder.java:100)
    at org.elasticsearch.common.xcontent.XContentBuilder.<init>(XContentBuilder.java:91)
    at org.elasticsearch.common.xcontent.XContentBuilder.builder(XContentBuilder.java:71)
    at org.elasticsearch.common.xcontent.smile.SmileXContent.contentBuilder(SmileXContent.java:39)
    at org.elasticsearch.common.xcontent.XContentFactory.contentBuilder(XContentFactory.java:97)
    at org.elasticsearch.search.builder.SearchSourceBuilder.buildAsBytesStream(SearchSourceBuilder.java:552)
    at org.elasticsearch.action.search.SearchRequest.extraSource(SearchRequest.java:361)
    at org.elasticsearch.rest.action.search.RestSearchAction.parseSearchRequest(RestSearchAction.java:131)
    at org.elasticsearch.rest.action.search.RestSearchAction.handleRequest(RestSearchAction.java:68)
    at org.elasticsearch.rest.RestController.executeHandler(RestController.java:159)
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:142)
    at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:120)
    at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:82)
    at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:255)
    at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:43)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:111)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:420)
[2012-09-27 02:33:55,251][WARN ][transport.netty ] [delayed_job2] Exception caught on netty layer [[id: 0x0843249a, /192.168.100.28:59408 => /192.168.100.224:9300]]
java.io.StreamCorruptedException: invalid data length: -1029436289
    at org.elasticsearch.transport.netty.MessageChannelHandler.callDecode(MessageChannelHandler.java:139)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:101)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:91)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)

We were only querying; there was no bulk importing at all. delayed_job2
has its heap size set to 512 MB for a client node, which should be enough,
right?
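
One way to answer that is to watch the heap from inside the client JVM
while the queries run; the GC line above already shows the CMS old gen at
258mb of its 345.6mb ceiling right after a collection, so the node was
close to full before the OutOfMemoryError. A minimal, JDK-only sketch (no
Elasticsearch API involved; the class and method names are made up) that
the process embedding the client node could call periodically:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Log heap usage of the JVM hosting the client node (client=true, data=false).
public class HeapWatch {

    public static void logHeap() {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();
        long usedMb = heap.getUsed() / (1024 * 1024);
        long maxMb = heap.getMax() / (1024 * 1024);
        System.out.println("heap: " + usedMb + "mb used of " + maxMb + "mb max");
    }
}

If the old gen stays near its ceiling under pure query load, then 512 MB
is simply too small for the working set, regardless of whether any bulk
indexing is happening.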
