I just tested with one master-only node and two data-only nodes and did not
have a problem. Can you recreate this issue and create a gist or a GitHub
issue describing how you managed to get into that state? Also, what
Elasticsearch version is this?
--Alex
On Fri, Apr 4, 2014 at 8:22 PM, Andrew Mehler mehler@gmail.com wrote:
I have a cluster with 15 nodes: 10 data nodes, 3 master+client nodes, and 2
client-only nodes.
/_cluster/health
shows everything is normal:
number_of_nodes: 15,
number_of_data_nodes: 10,
but
/_nodes
is only showing the 5 non-data nodes, not all 15.
However, if I ask for specific information on the nodes, say
/_nodes/jvm
then all 15 will show up.
Does anyone know what is going on here? The discrepancy is causing some
plugins not to work correctly.
Thanks!
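
For anyone who wants to reproduce the comparison described above, here is a
minimal sketch using only the plain JDK. The host and port, and the trick of
counting "transport_address" keys as a stand-in for the number of node entries
in each response, are assumptions for illustration only, not how any plugin
actually reads the API:

// Minimal sketch: compare how many node entries /_nodes and /_nodes/jvm return.
// Assumes a node reachable at http://localhost:9200; counting occurrences of the
// "transport_address" key is only a rough proxy for the number of nodes reported.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class NodeCountCheck {

    static String get(String path) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:9200" + path).openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            for (String line; (line = in.readLine()) != null; ) {
                body.append(line);
            }
        }
        return body.toString();
    }

    static int countNodeEntries(String json) {
        int count = 0;
        for (int i = json.indexOf("\"transport_address\""); i >= 0;
                i = json.indexOf("\"transport_address\"", i + 1)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("_cluster/health: " + get("/_cluster/health"));
        System.out.println("nodes in /_nodes:     " + countNodeEntries(get("/_nodes")));
        System.out.println("nodes in /_nodes/jvm: " + countNodeEntries(get("/_nodes/jvm")));
    }
}

On a cluster showing the problem above, the two counts should differ (5 vs. 15
in the setup described) if some node info responses really are being dropped.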
On Mon, Apr 7, 2014 at 2:46 PM, Andrew Mehler mehler@gmail.com wrote:
This is version 1.1.
I believe I found what is causing the error:
[2014-04-07 12:38:59,270][DEBUG][action.admin.cluster.node.info] [newspd4.aoa.twosigma.com-master-0] failed to execute on node [1OfLJ-r_RJinwXN2C0dqoQ]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.info.NodeInfo]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.info.NodeInfo]
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:148)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 12508
    at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
    at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:132)
    at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:276)
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:61)
    at org.elasticsearch.action.admin.cluster.node.info.PluginInfo.readFrom(PluginInfo.java:133)
    at org.elasticsearch.action.admin.cluster.node.info.PluginInfo.readPluginInfo(PluginInfo.java:126)
    at org.elasticsearch.action.admin.cluster.node.info.PluginsInfo.readFrom(PluginsInfo.java:67)
    at org.elasticsearch.action.admin.cluster.node.info.PluginsInfo.readPluginsInfo(PluginsInfo.java:59)
    at org.elasticsearch.action.admin.cluster.node.info.NodeInfo.readFrom(NodeInfo.java:236)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:146)
    ... 23 more
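
The tail of that trace says the master gave up while deserializing the NodeInfo
response from node 1OfLJ-r_RJinwXN2C0dqoQ: it ran out of readable bytes while
reading the plugin info strings (PluginInfo.readFrom via StreamInput.readString).
That class of failure typically means the sending and receiving sides disagree
about the wire format of the response, for example because the nodes are not all
on the same version or do not serialize their plugin metadata identically. A
minimal sketch of the same class of failure, using plain JDK streams rather than
Elasticsearch code:

// Hypothetical illustration, not Elasticsearch source: the writer serializes fewer
// fields than the reader expects, so the reader runs past the end of the buffer,
// the same class of failure as the "Readable byte limit exceeded" above.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class WireMismatchDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeUTF("my-plugin");           // sender writes only a plugin name ...
        }

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        String name = in.readUTF();
        System.out.println("name = " + name);
        String description = in.readUTF();       // ... receiver also expects a description: EOFException
        System.out.println("description = " + description); // never reached
    }
}

If that reading is right, a sensible first check is whether every node in the
cluster is running exactly the same Elasticsearch version and the same set of
plugins; the nodes whose responses fail to deserialize would be exactly the ones
missing from the plain /_nodes output. It would also explain why /_nodes/jvm
works: that request only carries JVM info, so the plugin section that trips the
reader is never serialized.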