Failed to deserialize exception response from stream

In attempting to set up a basic two-node cluster, any replication seems to
fail. I'm unable to tell exactly why it's failing, because the response
exception cannot be parsed. If I run a single node, add some data, and then
add a second node, it discovers the peer node fine and seems healthy until I
manipulate a doc on the first node, at which point the errors below persist
until I stop the peer. I've also reproduced the same behavior with the two
nodes peered and no data.

Both nodes are running Elasticsearch 0.90.0 on Java 1.7.0_17, on dedicated
hardware with slightly different Linux distros. The ES configuration isn't too
complicated: I'm using ZooKeeper for discovery and have compression enabled
for the TCP transport.
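For reference, a minimal sketch of the relevant elasticsearch.yml bits (the
cluster name is a placeholder, and the ZooKeeper plugin's own settings are
omitted rather than guessed at):

```
# elasticsearch.yml (sketch, not the actual config from this report)
cluster.name: test-cluster
transport.tcp.compress: true   # compression on the TCP transport
# discovery is handled by the ZooKeeper plugin; its settings are omitted here
```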

I've seen similar discussions (
https://groups.google.com/forum/?fromgroups#!searchin/elasticsearch/Caused$20by$3A$20java.io.StreamCorruptedException$3A$20unexpected$20end$20of$20block$20data|sort:relevance/elasticsearch/MSrKvfgKwy0/Tfk6nhlqYxYJ
and
https://groups.google.com/forum/?fromgroups#!searchin/elasticsearch/Caused$20by$3A$20java.io.StreamCorruptedException$3A$20unexpected$20end$20of$20block$20data|sort:relevance/elasticsearch/TKNOlZYvDHg/k3VDgwki_VcJ
).

The offending stack trace:

[2013-06-05 09:35:20,480][WARN ][action.index ] [es-1] Failed to perform index on replica [rules][4]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:171)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.StreamCorruptedException: unexpected end of block data
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1369)
    at java.io.ObjectInputStream.access$300(ObjectInputStream.java:205)
    at java.io.ObjectInputStream$GetFieldImpl.readFields(ObjectInputStream.java:2132)
    at java.io.ObjectInputStream.readFields(ObjectInputStream.java:537)
    at java.net.InetSocketAddress.readObject(InetSocketAddress.java:282)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1004)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1872)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1970)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1894)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1970)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1894)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)
    ... 23 more
[2013-06-05 09:35:20,483][WARN ][cluster.action.shard ] [es-1] sending failed shard for [rules][4], node[Ed7LBQdzQMae69IzlCm-Dg], [R], s[STARTED], reason [Failed to perform [index] on replica, message [RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
[2013-06-05 09:35:20,483][WARN ][cluster.action.shard ] [es-1] received shard failed for [rules][4], node[Ed7LBQdzQMae69IzlCm-Dg], [R], s[STARTED], reason [Failed to perform [index] on replica, message [RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Hey,

can you create an issue for this, including your configuration, your
mapping, and maybe some sample data you indexed?
Just to make sure: does this happen every time, and can it be reproduced
reliably?
You could test without compression in order to check whether that is the
issue, and without ZooKeeper in order to isolate the problem a bit, if
possible.
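
Testing without compression amounts to flipping one transport setting (a
sketch; apply on both nodes and restart them):

```
transport.tcp.compress: false
```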

--Alex

On Wed, Jun 5, 2013 at 7:39 PM, Erik Onnen eonnen@gmail.com wrote:



Following up on this: I disabled compression and removed the ZooKeeper
plugin altogether, and the same error still occurs with both unicast and
multicast Zen discovery.

On Wednesday, June 5, 2013 10:39:50 AM UTC-7, Erik Onnen wrote:


Following up again, bug filed here:

https://github.com/elasticsearch/elasticsearch/issues/3145
This only occurs for me when I run a concurrent load harness. It's
repeatable even with the most stripped-down of configurations.
Single-threaded, things are fine.
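
Roughly, the shape of such a harness (a sketch, not the actual code from
this thread; `index_doc` is a hypothetical stand-in for the HTTP PUT that a
real run would issue against the cluster):

```python
# Sketch of a concurrent load harness (hypothetical; the real harness is not
# shown in this thread). index_doc is a stand-in: a real run would PUT each
# document to http://<node>:9200/rules/rule/<i> instead of returning locally.
from concurrent.futures import ThreadPoolExecutor


def index_doc(i):
    # Stand-in for the real index call against the cluster.
    return {"_index": "rules", "_type": "rule", "_id": str(i), "ok": True}


def run_harness(n_docs=100, n_threads=16):
    # Many threads indexing at once is what triggers the error here;
    # a single thread (n_threads=1) does not.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = list(pool.map(index_doc, range(n_docs)))
    return sum(1 for r in results if r["ok"])


if __name__ == "__main__":
    print(run_harness())  # 100
```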

On Thursday, June 6, 2013 10:55:59 AM UTC-7, Erik Onnen wrote:


Duplicate of an older thread here (no solution yet):
https://groups.google.com/d/msg/elasticsearch/MSrKvfgKwy0/Tfk6nhlqYxYJ

Jörg

On 07.06.13 02:17, Erik Onnen wrote:

Following up again, bug filed here:

https://github.com/elasticsearch/elasticsearch/issues/3145

This only occurs for me when I run a concurrent load harness. It's
repeatable even with the most stripped down of configurations. Single
threaded, things are fine.
