Node stats - Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.stats.NodeStats]

ES version 0.90.3
38-node cluster running on 6 machines.

This problem started after a few days of running normally.

Hitting the cluster with ES Head and/or BigDesk triggers this issue.

Is it possible that the node stats messages have gotten too long?
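(For reference, the nodes stats response in question can also be requested directly over HTTP; the host and port below are just placeholders for a default setup, and the second path is the older-style alias:)

curl -XGET 'http://localhost:9200/_nodes/stats?pretty=true'
# or, equivalently on the 0.90.x line:
curl -XGET 'http://localhost:9200/_cluster/nodes/stats?pretty=true'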

2013-11-07 10:18:40,589[DEBUG][action.admin.cluster.node.stats] [] failed to execute on node [XFjF-OfoQUW3iVAKTmi-AA]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:147)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:124)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 2380


This often happens when your nodes are not all running the same version, or when the JVM is not exactly the same on every node.
Could that be the cause here?
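A quick way to compare them is the nodes info API; the host/port below and the jvm flag are assumptions for a stock 0.90.x setup:

curl -XGET 'http://localhost:9200/_nodes?jvm=true&pretty=true'
# look at the version and jvm sections reported for each node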

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


I just double-checked: all nodes are running 0.90.3 and the same JDK.


I think you should first consider upgrading your cluster to 0.90.6, as many issues have been fixed since then.
Also, although it's probably not related to this: why are you running 38 nodes on 6 machines instead of only 6 nodes?

Do you have that much memory per machine?

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


Have you tried upgrading to the latest version of BigDesk, released a few
weeks ago? It relieves some of the traffic pressure it was exerting on the
cluster:

Reduce the amount of the data pulled via HTTP/REST · Issue #41 · lukas-vlcek/bigdesk · GitHub

Cheers,

Ivan


Maybe you can try to execute the cluster state, nodes info, nodes stats, or
indices status API calls manually from the command line via curl, to see if
this happens as well (just to make sure BigDesk or Head is not the bad guy
here), for example with the commands sketched below.
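Something along these lines should cover those calls; localhost:9200 is just a placeholder, point it at any of your nodes:

curl -XGET 'http://localhost:9200/_cluster/state?pretty=true'
curl -XGET 'http://localhost:9200/_nodes?pretty=true'
curl -XGET 'http://localhost:9200/_nodes/stats?pretty=true'
curl -XGET 'http://localhost:9200/_status?pretty=true'

If the plain curl calls come back clean every time, the problem is more likely in how the plugins combine or repeat these requests.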
Also, I found a mail thread with a similar issue here:
https://groups.google.com/forum/#!msg/elasticsearch/TQk3sC8DuaU/6oAEOW2zvVcJ
Try to check whether you are in the same situation.

Lukas


A restart made the issue go away, which is not very comforting.

The issue was reproducible from curl and from the browser's URL bar; it was
not limited to Head or BigDesk.

I question whether this should be logged at DEBUG; WARN seems more
appropriate.

I am upgrading to 0.90.6 in the next day or two. If the issue never comes
back, I will be sure to post here that the upgrade fixed it.

Mark
