I have a mini cluster of two nodes running 0.90.6.
Both contain data; one is the master.
I have written two custom faceting plugins against 0.90.6:
the first worked against 0.90.3 and I have upgraded it;
the second is brand new and was written against 0.90.6.
Both plugins work well and pass all unit tests (run against an in-memory node during testing).
Both work well when only one node of my cluster is running.
Both fail when two nodes are running.
Clearly, transport of the query results is not working. Any thoughts on what this might be? Next steps for troubleshooting?
Stack Trace:
[2013-11-14 10:44:43,533][DEBUG][action.search.type] [90_6_node_1] [v2-20131113][0], node[D13zlQZ4TnyCXNBF6DwR7g], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@44a9bbdb]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.search.query.QuerySearchResult]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.search.query.QuerySearchResult]
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:148)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 64
    at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
    at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:132)
    at org.elasticsearch.common.io.stream.AdapterStreamInput.readByte(AdapterStreamInput.java:35)
    at org.elasticsearch.common.io.stream.StreamInput.readBoolean(StreamInput.java:267)
    at org.elasticsearch.search.query.QuerySearchResult.readFrom(QuerySearchResult.java:149)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:146)
    ... 23 more
When running your tests, are you actually spinning up more than one node instance, so that at least one node has to communicate remotely? You should also execute the queries through a client node or a transport client, to make sure everything works end to end; this exercises your serialization code.
You will be able to use the same test classes Elasticsearch uses internally pretty soon, as those will be provided in an extra jar. This should simplify tests like this a lot, with the added benefit of randomized testing.
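As a self-contained sketch of that round-trip check: the snippet below uses plain java.io streams standing in for Elasticsearch's StreamOutput/StreamInput, and a hypothetical FacetState class standing in for a plugin's facet result. It is not the plugin API itself, just the symmetry contract a serialization unit test should assert: readFrom must consume exactly the fields writeTo produced, in the same order and with the same types.

```java
import java.io.*;

// Hypothetical stand-in for a custom facet's transport state. In an
// Elasticsearch plugin the equivalent methods are Streamable.writeTo(StreamOutput)
// and readFrom(StreamInput); plain DataOutput/DataInput keep this sketch runnable.
class FacetState {
    String name;
    long count;

    void writeTo(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeLong(count);
    }

    void readFrom(DataInput in) throws IOException {
        // Must mirror writeTo exactly: same fields, same order, same types.
        name = in.readUTF();
        count = in.readLong();
    }
}

public class RoundTripTest {
    public static void main(String[] args) throws IOException {
        FacetState original = new FacetState();
        original.name = "terms";
        original.count = 42L;

        // Serialize to bytes, as the transport layer would before sending.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.writeTo(new DataOutputStream(bytes));

        // Deserialize into a fresh instance and compare field by field.
        FacetState copy = new FacetState();
        copy.readFrom(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        if (!original.name.equals(copy.name) || original.count != copy.count) {
            throw new AssertionError("round trip lost data");
        }
        System.out.println("round trip ok");
    }
}
```

A test like this catches a field-level mismatch without starting any nodes; the multi-node test Alexander describes is still needed to exercise the real transport path.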
Thanks for this, I am now able to make this error occur under unit testing. Clearly I am not serializing or deserializing my custom facet results or query correctly; I am not sure where yet.
Mark
On Thursday, November 14, 2013 11:22:41 AM UTC-5, Alexander Reelsen wrote:
On Thu, Nov 14, 2013 at 5:05 PM, Leonardo Menezes <leonardo...@gmail.com> wrote:
Are both nodes on the same Java version? I have seen errors like that before due to differing Java versions.
On Thu, Nov 14, 2013 at 4:53 PM, Mark Conlin <mark....@gmail.com> wrote:
SOLVED:
It was a bad writeTo / readFrom implementation in my plugin.
Lessons learned:
you really should unit test and integration test with multiple nodes
if your master node has data, and your client app doesn't check results for shard failures and failure messages but simply consumes data from the shards that worked, you won't see intra-node errors like this
Elasticsearch logs these errors at DEBUG
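For anyone hitting the same trace: a writeTo / readFrom mismatch can be reproduced outside Elasticsearch. In this hypothetical sketch (plain java.io streams, not the plugin API), the reader consumes more bytes than the writer produced and runs off the end of the buffer, which is the stdlib analogue of Netty's "Readable byte limit exceeded" IndexOutOfBoundsException in the stack trace above.

```java
import java.io.*;

// Illustration of the failure mode, not actual plugin code: the writer emits
// one int, but the reader asks for an int plus a long. The extra read falls
// past the end of the serialized bytes.
public class AsymmetricStreams {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(123);                 // "writeTo": writes 4 bytes

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(in.readInt());  // "readFrom": consumes the 4 bytes, prints 123
        try {
            in.readLong();                 // extra read the writer never paired
        } catch (EOFException e) {
            System.out.println("reader ran past the written bytes");
        }
    }
}
```

The same effect occurs if the reader consumes the right number of fields but in a different order or with different widths; the corruption then shows up as garbage values rather than an exception, which is why a field-by-field round-trip test is worth having in addition to multi-node integration tests.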
On Thursday, November 14, 2013 1:24:21 PM UTC-5, Mark Conlin wrote:
Hi Mark. I am running into the same issue with custom aggregations. After reading your reply, I found a fault in my readFrom() method too and fixed it, but that still did not solve the problem. Did you need to fix anything else?