Also found this from a few days ago when the same error had occurred:
[2013-03-03 19:55:13,183][DEBUG][action.admin.cluster.node.stats] [elasticsearch-server-3] failed to execute on node [k9Dq3NsbQCq-RPmh4FS-2w]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:150)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:127)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:313)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 300
    at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
    at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:121)
    at org.elasticsearch.common.io.stream.StreamInput.readInt(StreamInput.java:99)
    at org.elasticsearch.common.io.stream.StreamInput.readLong(StreamInput.java:130)
    at org.elasticsearch.common.io.stream.AdapterStreamInput.readLong(AdapterStreamInput.java:93)
    at org.elasticsearch.monitor.os.OsStats.readOsStats(OsStats.java:193)
    at org.elasticsearch.action.admin.cluster.node.stats.NodeStats.readFrom(NodeStats.java:263)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:148)
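The innermost cause ("Readable byte limit exceeded: 300" while reading OsStats inside NodeStats.readFrom) means the receiving node ran past the end of the buffer while deserializing a node-stats reply, i.e. the bytes on the wire were not what it expected. To narrow down which node produces the bad reply, something along these lines hits each node's stats endpoint directly; this is only a sketch: the host names are placeholders and /_cluster/nodes/stats is the 0.20-era path as far as I remember, so adjust both to your setup:

for host in elasticsearch-server-1 elasticsearch-server-2 elasticsearch-server-3; do
  echo "== $host =="
  curl -s -m 10 "http://$host:9200/_cluster/nodes/stats?pretty=true" | head -n 20
done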
On Wednesday, March 6, 2013 6:59:31 PM UTC-8, Govind Chandrasekhar wrote:
Every once in a while, one of my master nodes loses its connection with the other (primary and slave) nodes. When I execute 'curl -XGET localhost:9200/_nodes', the request just hangs and I get no response (cluster health reports that everything is "green", though).
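A quick way to see the hang next to the health check is to bound both calls with a client-side timeout; roughly (the -m 10 value is arbitrary):

curl -s -m 10 'http://localhost:9200/_cluster/health?pretty=true'
curl -s -m 10 'http://localhost:9200/_nodes?pretty=true'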
I found this in my log files during one of the errors today:
[2013-03-07 00:44:38,588][WARN ][transport.netty ] [elasticsearch-server-3] exception caught on transport layer [[id: 0x30a053f6, /10.30.141.74:37560 => /10.151.17.197:9300]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format
    at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:27)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:313)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
Previous posts on the forums suggest that this error is caused by a version mismatch between nodes, but that's not the case for me. All nodes run 0.20.2 and work just fine 99% of the time.
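In case anyone wants to rule that out for themselves, something like the following should list the version each node reports via the nodes info API; I'm assuming the pretty-printed output carries a per-node "version" field:

curl -s 'http://localhost:9200/_nodes?pretty=true' | grep '"version"'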
Any suggestions/ideas would be much appreciated.