Logspam caused by getting index status in 1.3.3

It appears that fetching index status information in Elasticsearch 1.3.3
causes log spam and very high CPU utilization.

All of the log lines are of the form:

2014/09/29 21:38:56.663000 [DEBUG] action.admin.indices.status [local-alias] [logstash-2014.09.29][4], node[KlgByctJTyiO7iC_wwXcjg], [R], s[STARTED]: failed to executed [org.elasticsearch.action.admin.indices.status.IndicesStatusRequest@7cc1c6f4]
org.elasticsearch.transport.RemoteTransportException: [remote-alias][inet[/192.168.115.43:9300]][indices/status/s]
Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 100) on org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler@4cd03105
    at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:219)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:111)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

This behavior did not occur in 1.1.1.
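
In case it helps with reproducing, here is roughly what my polling looks like, sketched in Python with the requests library (the localhost:9200 address and the 5 second interval are placeholders, not my exact setup). It just hits the indices status API the way a monitoring dashboard would and prints the per-node thread-pool rejection counters from node stats, which should climb whenever the exception above is being logged:

# Rough reproduction/observation sketch (assumptions: Python with the
# `requests` library installed and a node reachable at localhost:9200).
import time
import requests

ES = "http://localhost:9200"   # assumed address; adjust for your cluster

for _ in range(10):
    # Poll the indices status API, roughly what a monitoring dashboard does.
    requests.get(ES + "/_status")

    # Dump per-node thread-pool rejection counters; these should climb
    # whenever the EsRejectedExecutionException above is being logged.
    stats = requests.get(ES + "/_nodes/stats/thread_pool").json()
    for node in stats["nodes"].values():
        for pool_name, pool in node["thread_pool"].items():
            if pool.get("rejected"):
                print(node["name"], pool_name, "rejected:", pool["rejected"])

    time.sleep(5)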


Mind running hot_threads and looking for something telling? That exception
describes the symptom more than the problem.
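
Something like the following is enough to grab it (a quick Python/requests sketch; the host and the parameter values are just placeholders):

# Fetch the hot_threads report from every node (assumes localhost:9200).
import requests

resp = requests.get(
    "http://localhost:9200/_nodes/hot_threads",
    params={"threads": 3, "interval": "500ms"},  # optional tuning knobs
)
print(resp.text)  # plain-text dump of the busiest threads on each node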

On Mon, Sep 29, 2014 at 5:43 PM, schmichael michael.schurter@gmail.com
wrote:

It appears that fetching index status information in Elasticsearch 1.3.3
causes log spam and very high CPU utilization.

The behavior does not seem to occur with 1.3.2.

Both kopf and bigdesk cause the error in 1.3.3, and I was running the
versions of them appropriate for 1.3.
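
For what it's worth, a back-of-the-envelope count (sketched in Python below; the index count and the default 5 shards with 1 replica are assumptions, not my real settings) shows how easily a single status call can produce far more shard-level requests than the 100-slot queue from the exception can hold:

# Rough fan-out estimate for one indices-status call (all numbers assumed).
daily_indices = 30        # e.g. a month of logstash-YYYY.MM.DD indices
shards_per_index = 5      # Elasticsearch default
copies_per_shard = 2      # one primary plus one replica

shard_level_requests = daily_indices * shards_per_index * copies_per_shard
queue_capacity = 100      # from the exception: "queue capacity 100"

print(shard_level_requests, ">", queue_capacity)   # 300 > 100 -> rejections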

On Monday, September 29, 2014 2:43:00 PM UTC-7, schmichael wrote:

It appears that fetching index status information in Elasticsearch 1.3.3
causes log spam and very high CPU utilization.

There is already an issue opened for this:

Stats: _status with #shards >> queue capacity failing with BroadcastShardOperationFailedException · Issue #7916 · elastic/elasticsearch · GitHub

Jörg

On Mon, Sep 29, 2014 at 11:52 PM, schmichael michael.schurter@gmail.com
wrote:

The behavior does not seem to occur with 1.3.2.

Both kopf and bigdesk cause the error in 1.3.3, and I was running the
versions of them appropriate for 1.3.

Thanks Jörg! I'm marking my thread as complete since the issue already has
a better summary of the problem than I've posted here.

On Monday, September 29, 2014 3:18:08 PM UTC-7, Jörg Prante wrote:

There is already an issue opened for this:

Stats: _status with #shards >> queue capacity failing with BroadcastShardOperationFailedException · Issue #7916 · elastic/elasticsearch · GitHub

Jörg
