My problem is pretty similar to this one. We have a dedicated monitoring cluster with 2 nodes that we upgraded from 5.4 to 5.6.9 a month ago. Recently, we have not been able to load the "Advanced" tab for any individual node. Every time we try, we get a 503 search_phase_execution_exception. After looking into the error logs, I found that this is because we are blowing through the search thread pool's queue limit.
Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution of org.elasticsearch.action.search.FetchSearchPhase$1@6fed4e8b on EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@39018d64[Running, pool size = 7, active threads = 7, queued tasks = 1003, completed tasks = 111115001]]
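In case it is useful to anyone, this is roughly how I watch the search thread pool directly instead of going through Kibana. It is a minimal Python sketch that assumes the cluster is reachable at localhost:9200 with no authentication; adjust the host for your own setup.

import requests

# Cat thread pool API: shows active threads, current queue depth, and the
# cumulative rejection count for the search pool on each node.
resp = requests.get(
    "http://localhost:9200/_cat/thread_pool/search",
    params={"v": "true", "h": "node_name,name,active,queue,rejected,completed"},
)
resp.raise_for_status()
print(resp.text)

If the queue column is sitting near 1000 and rejected keeps climbing, that lines up with the rejection in the log above.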
This monitoring cluster is not heavily used, and when I pull up its search stats you can see it is really not doing anything until we try to load one of the monitoring pages. Both nodes' search stats look similar to this one.
We have tried closing old indexes in an effort to solve this issue, but it has not worked. The cluster currently has 280 open .monitoring-es and .monitoring-kibana indexes.
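For reference, this is roughly how I count the open monitoring indexes, using the cat indices API (same assumptions as above about host and authentication):

import requests

# List the monitoring indexes with their status so we can see how many are still open.
resp = requests.get(
    "http://localhost:9200/_cat/indices/.monitoring-*",
    params={"h": "index,status", "format": "json"},
)
resp.raise_for_status()
open_indices = [row["index"] for row in resp.json() if row["status"] == "open"]
print(len(open_indices), "open monitoring indexes")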
Unlike our main cluster, where I have control over how we handle searching, I don't know much about how Kibana executes its searches, so I am not sure of the best way to go about fixing this.
As you can see, I tried to put some search load on my cluster by repeatedly clicking the pause/refresh button on the Advanced Node page.
If you close all other browser pages searching against the monitoring data and watch that chart for a while, you should be able to see the queue go down. I would wait quite a bit and see if you can get it to drop significantly. Once it is back down, you should be able to open the Advanced Node page in the monitoring application.
My visualization looks very similar to yours. It never really increases past two, and when I try to load the Advanced Node page it throws a 503 with no change in this graph. If I view the max of node_stats.thread_pool.search.rejected, it is a flatline, which does not seem right considering the error I am seeing.
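To double-check that flatline outside of the visualization, I also queried the monitoring indexes for that field directly. This is a rough sketch; the .monitoring-es-* pattern, the type field layout (which can vary with the monitoring format version), and localhost:9200 are all assumptions about my setup.

import requests

# Max search-pool rejection count recorded in node_stats documents over the last hour.
body = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"type": "node_stats"}},
                {"range": {"timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    "aggs": {
        "max_rejected": {"max": {"field": "node_stats.thread_pool.search.rejected"}},
    },
}
resp = requests.post("http://localhost:9200/.monitoring-es-*/_search", json=body)
resp.raise_for_status()
print(resp.json()["aggregations"]["max_rejected"])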
We seem to have fixed the problem by closing all our indexes up until March 1st. My guess is that the queuing has to do with the number of indexes it tries to search for each request. I noticed when I viewed the logs that it does not limit the indexes by date, despite the time window requested for a search.
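In case it helps anyone else, this is roughly the script I used to close the old monitoring indexes up to a cutoff date. It assumes the default .monitoring-*-YYYY.MM.DD index naming, an unauthenticated cluster on localhost:9200, and a cutoff year of 2018, so treat it as a sketch rather than a drop-in fix. I believe X-Pack also has a monitoring retention setting (xpack.monitoring.history.duration) that is supposed to clean these up automatically, so it may be worth checking why that is not kicking in.

from datetime import datetime
import requests

HOST = "http://localhost:9200"   # assumption: local, unauthenticated cluster
CUTOFF = datetime(2018, 3, 1)    # close everything dated before March 1st (year assumed)

resp = requests.get(HOST + "/_cat/indices/.monitoring-*",
                    params={"h": "index,status", "format": "json"})
resp.raise_for_status()

for row in resp.json():
    index = row["index"]
    # Default monitoring index names end in YYYY.MM.DD, e.g. .monitoring-es-6-2018.02.14
    try:
        day = datetime.strptime(index.rsplit("-", 1)[-1], "%Y.%m.%d")
    except ValueError:
        continue
    if row["status"] == "open" and day < CUTOFF:
        requests.post(HOST + "/" + index + "/_close").raise_for_status()
        print("closed", index)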