Recently set up a cluster using 0.19.2 and created two large indices.
Upgraded to 0.19.8 (without reindexing) and now I am seeing the
following errors over and over again:
[2012-08-22 11:46:39,087][DEBUG][action.admin.cluster.node.stats]
[node1] failed to execute on node [rbJHTV4rRoe2QMC15PT4PA]
org.elasticsearch.transport.RemoteTransportException: Failed to
deserialize response of type
[org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
Caused by: org.elasticsearch.transport.TransportSerializationException:
Failed to deserialize response of type
[org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:282)
...
No queries were being executed on this cluster. The only clients have
been bigdesk and head.
It appears that the error occurs when connecting via head or bigdesk,
so I am assuming they are using an older API.
Does anyone know what the changes were? I can attempt to patch both
products with some guidance.
Cheers,
Ivan
Trying to debug another issue (one of my nodes cannot seem to start and
accept shards), and these warnings are filling up my logs. Here are a few
more: NodeStats serialization error · GitHub
All nodes and clients are 0.19.8. Bigdesk/Head should be the latest.
--
Ivan
I'm seeing the same warnings in 0.19.9 and 0.19.10 whenever the latest Bigdesk/Head makes any request to ES while logstash is connected to ES using the "elasticsearch" output (http://logstash.net/docs/1.1.1/outputs/elasticsearch):
[2012-10-10 09:16:52,771][DEBUG][action.admin.cluster.node.stats] [wardentest2] failed to execute on node [9pcrQ8USTzGeA-1fcQRCIw]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:150)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:127)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:458)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:439)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:390)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:261)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Expected handle header, got [17]
at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:88)
at org.elasticsearch.monitor.jvm.JvmStats$BufferPool.readFrom(JvmStats.java:945)
at org.elasticsearch.monitor.jvm.JvmStats.readFrom(JvmStats.java:426)
at org.elasticsearch.monitor.jvm.JvmStats.readJvmStats(JvmStats.java:408)
at org.elasticsearch.action.admin.cluster.node.stats.NodeStats.readFrom(NodeStats.java:269)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:148)
... 22 more
The problem seems to occur when something tries to query the node stats for the "node" created by logstash. Head can still parse Info > Cluster Node Info, however.
I'm not quite sure where to look next. I think there may be something
strange/wrong about the way logstash is connecting to ES that these plugins
don't like.
The node stats serialization has changed a bit between versions of 0.19.x, which causes the problems. If you are using the node stats API, you need to make sure that all ES nodes are on the same 0.19.x version.
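Since the wire format can differ even across 0.19.x patch releases, a quick sanity check is to confirm that every node (including any client "nodes", e.g. one created by logstash) reports the same version in a `GET /_nodes` response. The helper below is a minimal sketch against a hand-trimmed sample response, not an official tool; the sample node names and versions are illustrative only:

```python
import json

def find_version_mismatches(nodes_response):
    """Group node names by reported ES version from a GET /_nodes response.

    Returns a dict mapping version -> list of node names. More than one
    key means the cluster has mixed versions, and transport-level
    deserialization errors like the NodeStats one above become possible.
    """
    by_version = {}
    for node_id, node in nodes_response.get("nodes", {}).items():
        version = node.get("version", "unknown")
        by_version.setdefault(version, []).append(node.get("name", node_id))
    return by_version

# Illustrative trimmed response: two data nodes plus one stale client node.
sample = {
    "nodes": {
        "abc": {"name": "node1", "version": "0.19.10"},
        "def": {"name": "node2", "version": "0.19.10"},
        "ghi": {"name": "logstash-client", "version": "0.19.4"},
    }
}

versions = find_version_mismatches(sample)
if len(versions) > 1:
    print("Mixed versions detected:", json.dumps(versions, indent=2))
```

In a real cluster you would feed this the JSON body of `curl localhost:9200/_nodes` and look for more than one version key.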
If the culprit is bigdesk/head, doesn't that mean that the REST interface
(RestNodesInfoAction?) is not sending the proper request? Or are there
different versions of bigdesk/head depending on the ES version, and the
wrong one is being used?
I'm back on 0.19.2, so I haven't seen the issue in a while.
--
Ivan
On Thu, Oct 11, 2012 at 8:02 AM, Shay Banon kimchy@gmail.com wrote:
The node stats serialization has changed a bit between versions of 0.19.x,
which causes the problems. If you are using the node stats API, you need to
make sure that all ES nodes are on the same 0.19.x version.
I think when bigdesk hits a node, that node necessarily has to pull cluster
and node info from the other nodes. So the error is due to inter-node comms
triggered by bigdesk or any other client polling the perf data APIs.
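That fan-out can be sketched with a toy model: the node that receives the bigdesk request asks every peer for its stats over the transport layer, and a peer speaking a different wire format produces a payload the coordinator cannot decode. All names here are illustrative; real ES uses its own binary transport protocol, not a simple format tag:

```python
class ToySerializationError(Exception):
    pass

def serialize_stats(stats, wire_format):
    # Each node encodes its stats tagged with its wire-format version.
    return (wire_format, stats)

def deserialize_stats(payload, expected_format):
    wire_format, stats = payload
    if wire_format != expected_format:
        # Analogous to "Failed to deserialize response of type [NodeStats]".
        raise ToySerializationError(
            f"expected format {expected_format}, got {wire_format}")
    return stats

def collect_node_stats(peers, coordinator_format):
    """Fan out to every peer and decode each reply, collecting failures.

    peers maps node name -> (that node's wire format, its stats dict).
    """
    results, errors = {}, {}
    for name, (peer_format, stats) in peers.items():
        payload = serialize_stats(stats, peer_format)
        try:
            results[name] = deserialize_stats(payload, coordinator_format)
        except ToySerializationError as exc:
            errors[name] = str(exc)
    return results, errors

# One well-matched data node, one client node on an older wire format.
peers = {
    "node2": (2, {"heap_used": "1gb"}),
    "logstash-client": (1, {"heap_used": "64mb"}),
}
results, errors = collect_node_stats(peers, coordinator_format=2)
```

The client never talks to the mismatched node directly, which is why the error surfaces in the server logs rather than in bigdesk itself.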