java.lang.OutOfMemoryError: Java heap space


(Alex Wajda) #1

Hi,

I'm running one Node embedded in my Java webapp, and once it has started
I start another node standalone, expecting both nodes to join one
cluster. When the 2nd node comes up, the 1st node fails with a
[java.lang.OutOfMemoryError: Java heap space] exception. I'm using
in-memory storage, but the index is empty, so that isn't the cause.
The error occurs in 100% of cases.

------------- 2nd node ----------------

[awajda@Calypso bin]$ ./elasticsearch -f
[18:51:56,600][INFO ][node ] [Tyrak] {elasticsearch/0.9.0}[21101]: initializing ...
[18:51:56,622][INFO ][plugins ] [Tyrak] loaded []
[18:52:00,408][INFO ][node ] [Tyrak] {elasticsearch/0.9.0}[21101]: initialized
[18:52:00,408][INFO ][node ] [Tyrak] {elasticsearch/0.9.0}[21101]: starting ...
[18:52:00,566][INFO ][transport ] [Tyrak] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/10.1.7.64:9301]}
[18:52:04,571][WARN ][discovery.zen.ping.unicast] [Tyrak] failed to send ping to [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9300]][discovery/zen/unicast]
[18:52:04,588][INFO ][cluster.service ] [Tyrak] new_master [Tyrak][adc86bf5-de16-4832-9390-13846d35e421][inet[/10.1.7.64:9301]], reason: zen-disco-join (elected_as_master)
[18:52:04,611][INFO ][discovery ] [Tyrak] IDC-local/adc86bf5-de16-4832-9390-13846d35e421
[18:52:04,619][INFO ][http ] [Tyrak] bound_address {inet[/0:0:0:0:0:0:0:0:9201]}, publish_address {inet[/10.1.7.64:9201]}
[18:52:04,847][INFO ][jmx ] [Tyrak] bound_address {service:jmx:rmi:///jndi/rmi://:9400/jmxrmi}, publish_address {service:jmx:rmi:///jndi/rmi://10.1.7.64:9400/jmxrmi}
[18:52:04,847][INFO ][node ] [Tyrak] {elasticsearch/0.9.0}[21101]: started

------------- 1st node (at the same time) ----------------

18:52:03,691 WARN [netty] [Astaroth / Asteroth] Exception caught on netty layer [[id: 0x0043290c, /127.0.0.1:39484 => /127.0.0.1:9300]]
java.lang.OutOfMemoryError: Java heap space
    at org.elasticsearch.common.io.stream.StreamInput.readUTF(StreamInput.java:113)
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readUTF(HandlesStreamInput.java:49)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:181)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:85)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:754)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:302)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:317)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:299)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:216)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:754)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:51)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:540)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:274)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:261)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:349)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)

Both nodes use the same config (YAML):

cluster.name: IDC-local

discovery.zen.ping.multicast:
    enabled: false

discovery.zen.ping.unicast:
    hosts: ["localhost:9300"]

index.storage.type: memory
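
For reference, the same settings can also be written in flat dotted-key form; as far as I know the settings loader accepts both the nested and the flat style interchangeably:

```yaml
cluster.name: IDC-local
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost:9300"]
index.storage.type: memory
```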

Thank you!

BR, Alex


(Shay Banon) #2

Are they using the same version?

On Tue, Sep 21, 2010 at 7:05 PM, Alex Wajda alexander.wajda@gmail.com wrote:



(Alex Wajda) #3

No, they were not. The 2nd node was running version 0.9.
After upgrading it to version 0.10, the error is gone.
Thank you, Shay!
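
For anyone who finds this thread later: an OOM inside StreamInput.readUTF is a classic symptom of a wire-format mismatch between transport versions. A decoder of this shape reads a length prefix and then allocates a buffer of that size; if the peer speaks a slightly different protocol, the bytes that land in the length field are effectively random, and the receiver can attempt a multi-gigabyte allocation. The sketch below is a self-contained, hypothetical length-prefixed decoder (plain JDK, not Elasticsearch's actual code) that illustrates the failure mode and the sanity check that prevents it:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class LengthPrefixDemo {

    /**
     * Reads a 4-byte length prefix followed by that many UTF-8 bytes.
     * Without the maxLen guard, a garbage length prefix would be passed
     * straight to `new byte[len]` and could trigger an OutOfMemoryError.
     */
    static String readPrefixedString(DataInputStream in, int maxLen) throws IOException {
        int len = in.readInt();            // length prefix from the wire
        if (len < 0 || len > maxLen) {     // sanity check a robust decoder needs
            throw new IOException("implausible string length: " + len);
        }
        byte[] buf = new byte[len];        // allocation sized by untrusted input
        in.readFully(buf);
        return new String(buf, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // Well-formed frame: length 5, then "hello".
        byte[] good = {0, 0, 0, 5, 'h', 'e', 'l', 'l', 'o'};
        System.out.println(readPrefixedString(
                new DataInputStream(new ByteArrayInputStream(good)), 1 << 20));

        // Misaligned frame, as when two wire-format versions disagree:
        // payload bytes get interpreted as the length prefix and parse
        // as roughly 2.1 billion, so the decoder rejects the frame
        // instead of attempting a ~2 GB allocation.
        byte[] bad = {0x7f, 0x6f, 0x6f, 0x70, 'x'};
        try {
            readPrefixedString(new DataInputStream(new ByteArrayInputStream(bad)), 1 << 20);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The general lesson holds for any length-prefixed protocol: validate lengths read off the wire before allocating, and keep all nodes in a cluster on compatible versions.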

On Sep 21, 9:48 pm, Shay Banon shay.ba...@elasticsearch.com wrote:

Are they using the same version?


