I am getting the exception shown below on the ES client side (in a concurrent
environment). Where should I dig in? In another group thread I was assured that the
client itself is thread safe (stateless), so I guess I have to tune the
execution context somehow. I have tried a fixed thread pool. Is there another
Executor that can eliminate the issue? 0.90.2 is in use. Thanks!
Exception in thread "main" org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to execute phase [query_fetch], total failure; shardFailures {RemoteTransportException[[jazz-node][inet[/192.168.1.101:9301]][search/phase/query+fetch]]; nested: EsRejectedExecutionException[rejected execution of [org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler]]; }
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:261)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$3.onFailure(TransportSearchTypeAction.java:214)
    at org.elasticsearch.search.action.SearchServiceTransportAction$5.handleException(SearchServiceTransportAction.java:263)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:170)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
OK, I have tried that suggestion on the client side this way (it is Scala code, but I'm sure it is readable):

val settings = ImmutableSettings.builder.put("threadpool.search.queue_size", -1).build
val nativeClient = nodeBuilder.client(true).settings(settings).clusterName(clusterName).node.client

Unfortunately, nothing changed...
Highly concurrent indexing works perfectly in the same environment.
Is there a reason why you are creating your own client instead of using the
TransportClient?
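For reference, here is a minimal sketch of what the TransportClient route could look like against 0.90.x. It is an illustration, not a drop-in fix: clusterName is assumed to be the same value used in the node-client snippet above, and the address is simply the jazz-node transport address that appears in the stack trace, so adjust both for your setup.

import org.elasticsearch.client.transport.TransportClient
import org.elasticsearch.common.settings.ImmutableSettings
import org.elasticsearch.common.transport.InetSocketTransportAddress

// Sketch only: connect to the cluster over the transport layer instead of
// joining it as a client node.
val settings = ImmutableSettings.settingsBuilder()
  .put("cluster.name", clusterName) // same clusterName as in the snippet above (assumed)
  .build()

// Address taken from the stack trace; add one entry per data node you have.
val client = new TransportClient(settings)
  .addTransportAddress(new InetSocketTransportAddress("192.168.1.101", 9301))

// ... run searches as before, then release the connections ...
client.close()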
The defaults have changed in 0.90 and so far I have not had a need to
change them.
Is the problem consistent? What happens if you change the reject_policy to
caller?
--
Ivan
On Mon, Jul 22, 2013 at 6:39 PM, vinh vinh@loggly.com wrote:
This is a server-side setting, so it needs to be configured in
elasticsearch.yml.
-Vinh
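For concreteness, the two knobs mentioned in this thread would go into elasticsearch.yml on the data node(s), roughly like this. The queue_size value is only an illustrative number, and reject_policy: caller is the option Ivan refers to; treat both as something to test rather than a recommendation.

threadpool.search.queue_size: 1000      # illustrative value; -1 should make the search queue unbounded
threadpool.search.reject_policy: caller # run rejected search tasks on the calling thread instead of aborting them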