Index ".async_search" size growing fast

Hello everyone,

On an Elasticsearch and Kibana cluster that is monitoring another Elasticsearch cluster, we found that the size of the ".async_search" index on the monitoring cluster is growing quickly (about 2 GB in just one hour) even though the system sees very little use.
The only current use of this monitoring cluster is running some KQL queries over the “filebeat*” indices. For this reason, we believe that this index is storing temporary data to support Kibana queries. What is the purpose of this index? Is there any way to manage how it works so that it uses less space?
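For reference, the growth can be tracked with a _cat request along these lines (the index pattern is an assumption about the system index name, and the column list can be adjusted):

GET _cat/indices/.async*?v&h=index,docs.count,docs.deleted,store.size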

Elasticsearch and Kibana 7.9.3.
Filebeat 7.10.1.

Thanks in advance.

Hi,
Could you share the index statistics of the .async-search index? They should show the number of documents in the index as well as the number of deleted documents, which would tell us whether this is an accumulation problem or whether the stored responses are simply very large.
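Something like this in the Kibana Dev Tools console (or via curl) should return those numbers; the docs,store metric filter is optional and only trims the response:

GET /.async-search/_stats/docs,store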
In general, Kibana deletes the async search immediately, so you shouldn't see the index growing indefinitely. However, when you navigate away from Kibana, some searches may continue to run, which is why all requests have a keep-alive of 1 minute. If Kibana doesn't poll the status during that minute, the search is cancelled but not deleted from the index.
The actual deletion of these tombstones is done by what we call the maintenance service. By default it runs every hour to clean up the index, but you can change the interval to be more aggressive. It's a cluster setting, so you can update your yml config with something like "async_search.index_cleanup_interval": "1m" and restart your nodes. I am not sure that this will fix your issue, but it's worth trying, and the index statistics should give us a better view of the problem.
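For reference, a minimal sketch of what that would look like in elasticsearch.yml on each node, using the 1m value quoted above:

# elasticsearch.yml (set on every node, restart required); 1m is just the example value from this reply
async_search.index_cleanup_interval: 1m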

Thank you very much for your reply.

Sorry for the delay, but we had to redeploy our cluster for other reasons...

After adding the "async_search.index_cleanup_interval" setting, the index size has not grown the way it did before. So the problem is solved.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.