Correct way to restart cluster / rejoin failed nodes

Hi there.

I am currently 'stress testing' ES to see if it can cope with our projected
analytics workload (50m docs/month, 12-month retention).

I am using a cluster of 4x c3.xlarge AWS instances (7.5GB RAM, 4 cores, SSDs,
no swap). Java 1.6, but I am going to restart the cluster on Java 1.7 if I can,
to test.
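
(The JVM version each node is actually running can be confirmed from the nodes
info API; a sketch, assuming the flag-style 0.90.x endpoint:

  curl -s 'http://localhost:9200/_nodes?jvm=true&pretty=true'
)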

When I get to around 100m documents in the database (5 indexes of around
20m documents each, 5 shards), I start to see issues:

  1. Occasionally nodes will time out and get ejected from the cluster. For
    example, running a faceted query earlier made the cluster freeze, and when
    it came back, 2 of the 4 nodes had been ejected, leaving only 2 nodes in
    the cluster.

  2. When I shut down all nodes and restart each node in turn, the nodes lock
    up at 100% CPU. No nodes can talk to each other as they all get timeout
    errors. I cannot access head or bigdesk as the browser keeps trying to
    connect with a 'waiting for socket' message. Eventually I may see errors
    like:

[2013-11-25 11:01:42,300][DEBUG][action.admin.cluster.node.stats] [Scanner] failed to execute on node [Hw-xbFGeRe68z9EfS5U7jA]
org.elasticsearch.index.engine.EngineClosedException: [sessions_201302][0] CurrentState[CLOSED]
at org.elasticsearch.index.engine.robin.RobinEngine.ensureOpen(RobinEngine.java:969)
at org.elasticsearch.index.engine.robin.RobinEngine.segmentsStats(RobinEngine.java:1181)
at org.elasticsearch.index.shard.service.InternalIndexShard.segmentStats(InternalIndexShard.java:509)
at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:154)
at org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)
at org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)
at org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

And:

[2013-11-25 11:05:55,694][WARN ][index.warmer ] [Scanner] [sessions_201303][1] failed to warm-up id cache
java.lang.OutOfMemoryError: Java heap space
[2013-11-25 11:11:24,730][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space

So it appears that the cluster cannot ever restart itself, as there is not
enough memory on the nodes to recover its indexes. Is that correct? Would
the unresponsiveness be due to lack of heap space?
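
As a quick sanity check on the heap theory (a sketch, assuming the flag-style
nodes stats API of the 0.90.x line): the jvm section of the node stats shows
per-node heap usage and GC activity, and cluster health shows whether nodes
have dropped out.

  curl -s 'http://localhost:9200/_nodes/stats?jvm=true&pretty=true'
  curl -s 'http://localhost:9200/_cluster/health?pretty=true'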

Al

(oh, ES_HEAP_SIZE = 4g)
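
(For reference, that variable only takes effect if it is visible to whatever
launches Elasticsearch; on the 0.90-era packages it typically lives in
/etc/default/elasticsearch or /etc/sysconfig/elasticsearch, or is exported by
hand, e.g.:

  export ES_HEAP_SIZE=4g
  bin/elasticsearch -f    # -f keeps 0.90.x in the foreground
)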

FYI: here is a log dump from one node; the cluster is now totally frozen.

[2013-11-25 11:54:28,806][INFO ][monitor.jvm ] [Kang the Conqueror] [gc][ConcurrentMarkSweep][626][149] duration [44.5s], collections [5]/[44.5s], total [44.5s]/[20.3m], memory [3.9gb]->[3.9gb]/[3.9gb], all_pools {[Code Cache] [3.5mb]->[3.6mb]/[48mb]}{[Par Eden Space] [266.2mb]->[266.2mb]/[266.2mb]}{[Par Survivor Space] [33.2mb]->[32.6mb]/[33.2mb]}{[CMS Old Gen] [3.6gb]->[3.6gb]/[3.6gb]}{[CMS Perm Gen] [30mb]->[30mb]/[166mb]}
[2013-11-25 11:55:00,619][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.common.netty.buffer.HeapChannelBuffer.<init>(HeapChannelBuffer.java:42)
at org.elasticsearch.common.netty.buffer.BigEndianHeapChannelBuffer.<init>(BigEndianHeapChannelBuffer.java:34)
at org.elasticsearch.common.netty.buffer.ChannelBuffers.buffer(ChannelBuffers.java:134)
at org.elasticsearch.common.netty.buffer.HeapChannelBufferFactory.getBuffer(HeapChannelBufferFactory.java:68)
at org.elasticsearch.common.netty.buffer.AbstractChannelBufferFactory.getBuffer(AbstractChannelBufferFactory.java:48)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:80)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
[2013-11-25 11:55:57,842][INFO ][monitor.jvm ] [Kang the Conqueror] [gc][ConcurrentMarkSweep][627][155] duration [53.3s], collections [6]/[53.4s], total [53.3s]/[21.2m], memory [3.9gb]->[3.9gb]/[3.9gb], all_pools {[Code Cache] [3.6mb]->[3.6mb]/[48mb]}{[Par Eden Space] [266.2mb]->[266.2mb]/[266.2mb]}{[Par Survivor Space] [32.6mb]->[32.9mb]/[33.2mb]}{[CMS Old Gen] [3.6gb]->[3.6gb]/[3.6gb]}{[CMS Perm Gen] [30mb]->[30.1mb]/[166mb]}
[2013-11-25 12:02:41,437][WARN ][transport.netty ] [Kang the Conqueror] exception caught on transport layer [[id: 0xaa91643d, /10.38.128.233:55243 => /10.38.129.189:9300]], closing connection
java.lang.OutOfMemoryError: Java heap space
[2013-11-25 12:02:41,438][WARN ][transport.netty ] [Kang the Conqueror] exception caught on transport layer [[id: 0xaa91643d, /10.38.128.233:55243 :> /10.38.129.189:9300]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format
at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:27)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
[2013-11-25 12:03:15,598][WARN ][transport.netty ] [Kang the Conqueror] Message not fully read (request) for [35855] and action [index/shard/recovery/fileChunk], resetting
[2013-11-25 12:03:15,600][WARN ][indices.cluster ] [Kang the Conqueror] [sessions_201304][3] failed to start shard
org.elasticsearch.indices.recovery.RecoveryFailedException: [sessions_201304][3]: Recovery failed from [Tagak the Leopard Lord][Pbn7aDJHR1-sVmKq4Fl95w][inet[/10.38.130.177:9300]] into [Kang the Conqueror][Sx1Xb_lRRyOoJk2RkjzfGg][inet[/10.38.129.189:9300]]
at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:303)
at org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:65)
at org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:171)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.elasticsearch.transport.RemoteTransportException: [Tagak the Leopard Lord][inet[/10.38.130.177:9300]][index/shard/recovery/startRecovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: [sessions_201304][3] Phase[2] Execution failed
at org.elasticsearch.index.engine.robin.RobinEngine.recover(RobinEngine.java:1156)
at org.elasticsearch.index.shard.service.InternalIndexShard.recover(InternalIndexShard.java:590)
at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:117)
at org.elasticsearch.indices.recovery.RecoverySource.access$1600(RecoverySource.java:61)
at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:333)
at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:319)
at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [Kang the Conqueror][inet[/10.38.129.189:9300]][index/shard/recovery/prepareTranslog] request_id [11504] timed out after [900005ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
... 3 more

You can ignore the debug-level EngineClosedException; that is likely from a bug that is fixed for the next release. The out-of-memory errors are much more likely the real trouble. I don't know the sizes of the Amazon instances offhand, but the rule of thumb for heap size is min(30GB, half of RAM). More heap should help.
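
In shell terms that rule of thumb works out to roughly the following (a rough
sketch for Linux, assuming the usual ES_HEAP_SIZE mechanism):

  # min(30g, half of physical RAM), in megabytes
  ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
  half_mb=$(( ram_kb / 2 / 1024 ))
  cap_mb=$(( 30 * 1024 ))
  heap_mb=$(( half_mb < cap_mb ? half_mb : cap_mb ))
  export ES_HEAP_SIZE=${heap_mb}m   # a bit under 4g on a 7.5GB box, so 4g is already about the ceiling there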

Sent from my iPhone

Great, thanks.

The servers have 8GB each, so a 4GB heap. So it seems like more memory will
help. Is there any guidance on how much memory I need for given index sizes
and facet cardinalities?

AL
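
For anyone landing on this thread later, the restart/rejoin sequence generally
recommended around this release line goes roughly as below. Treat it as a
sketch against 0.90.x: the allocation setting was renamed in later releases
(cluster.routing.allocation.enable), and the node counts assume this 4-node
cluster.

  # Rolling restart, one node at a time:
  # 1) stop the cluster re-allocating shards while a node is down
  curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
    "transient": { "cluster.routing.allocation.disable_allocation": true }
  }'

  # 2) restart the node and wait for it to rejoin (node count back to 4)
  curl 'http://localhost:9200/_cluster/health?pretty=true'

  # 3) re-enable allocation and wait for green before touching the next node
  curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
    "transient": { "cluster.routing.allocation.disable_allocation": false }
  }'

  # For a full-cluster restart, settings along these lines in elasticsearch.yml
  # keep the first nodes up from re-replicating everything while they wait for
  # the rest, and guard against split-brain with 4 master-eligible nodes:
  #   gateway.recover_after_nodes: 3
  #   gateway.expected_nodes: 4
  #   gateway.recover_after_time: 5m
  #   discovery.zen.minimum_master_nodes: 3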
