Error while querying elasticsearch

How to overcome the following issue? Will increasing the search queue capacity or any other config parameter help?

Error: Request to Elasticsearch failed:

{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"","phase":"fetch","grouped":true,"failed_shards":[],"caused_by":{"type":"es_rejected_execution_exception","reason":"rejected execution of org.elasticsearch.action.search.FetchSearchPhase$1@10a82f77 on EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5407c1e8[Running, pool size = 7, active threads = 7, queued tasks = 1000, completed tasks = 110165]]"}},"status":503}

    at http://10.20.12.50:5601/bundles/kibana.bundle.js?v=15063:237:1333
    at Function.Promise.try (http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:91:24383)
    at http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:91:23753
    at Array.map ()
    at Function.Promise.map (http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:91:23708)
    at callResponseHandlers (http://10.20.12.50:5601/bundles/kibana.bundle.js?v=15063:237:949)
    at http://10.20.12.50:5601/bundles/kibana.bundle.js?v=15063:236:20482
    at processQueue (http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:38:23621)
    at http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:38:23888
    at Scope.$eval (http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:39:4619)
    at Scope.$digest (http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:39:2359)
    at Scope.$apply (http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:39:5037)
    at done (http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:37:25027)
    at completeRequest (http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:37:28702)
    at XMLHttpRequest.xhr.onload (http://10.20.12.50:5601/bundles/commons.bundle.js?v=15063:37:29634)

Hi,

Which command are you using to search Elasticsearch?
Also, do your indices contain data, or have the indices only been created?
Please check.

How many indices/shards are you targeting with each search? How many searches do you have running concurrently?

Around 4456 shards, as these are time-based indices. I see this error from a dashboard with around 6-8 visualizations.

How many data nodes in the cluster? What is your average shard size?

This error has also been seen when searching from the Discover page (a single request) with a 10-15 minute time frame.

Unless you have quite a large number of data nodes, my guess is that you have too many small shards. When you refresh a dashboard, a task basically needs to be created for every shard and every visualisation in the dashboard, and these are queued up to be processed. With 4456 shards being queried and 8 visualisations, you get over 35k tasks, which quickly fills up all search queues.
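The fan-out described above can be sketched as a quick back-of-the-envelope calculation, using the shard and visualisation counts from this thread and the queue capacity reported in the error:

```shell
# Each dashboard refresh creates roughly one fetch task per shard per visualisation.
shards=4456
visualisations=8
queue_capacity=1000   # default search queue size, as seen in the error message

tasks=$((shards * visualisations))
echo "tasks created: $tasks (search queue holds only $queue_capacity)"
# tasks created: 35648 (search queue holds only 1000)
```

Anything beyond the queue capacity is rejected with es_rejected_execution_exception, which is exactly the 503 shown above.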

The answer to this is not to increase the queue size, but rather to change how you index and shard your data. Try to make sure that your average shard size is at least a few GB, ideally between 10GB and 30GB.
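As a sketch of how to act on this: shard sizes can be inspected with the `_cat/shards` API, and future time-based indices can be given fewer primary shards via an index template. The host, the `logs-*` pattern, and the shard count below are illustrative assumptions, not values from this thread — adjust them to your own naming and data volume:

```shell
# Inspect current shard sizes (the "store" column) to spot undersized shards.
curl -s 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,store&bytes=gb'

# Template applied to future indices matching the (assumed) logs-* pattern:
# fewer primaries so each shard can grow into the 10-30GB range.
curl -s -XPUT 'http://localhost:9200/_template/logs' \
  -H 'Content-Type: application/json' -d '{
  "template": "logs-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}'
```

Switching from daily to weekly or monthly indices (or using the Rollover API) is another way to reach the same target shard size.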


Sure. Thanks.

This is quite a common issue, so I created a blog post with some guidelines around shards and sharding.


Awesome blog. Thank you.

I have seen the following links and wanted to check whether this issue could also be related to the index templates/mappings.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.