Date range in query, yet all indices that match pattern are searched

Description of the problem including expected versus actual behavior: Queries time out in Kibana due to what appears to be a problem where the time picker is ignored and all indices matching the index pattern are queried, rather than just the ones that hold data within the time picker's range.

Our Elasticsearch cluster has tiered storage nodes in a hot/warm/cold architecture, with data moving off of the hot nodes after 5-7 days depending on the index. I'm noticing that our cold Elasticsearch nodes, which only hold indices with data more than 7 days old, are getting these queries for every index that matches the index pattern and timing out as they attempt to look through hundreds of GB or TBs of data. I would expect that only the hot nodes, whose indices hold data from the last 15 minutes, would get these queries.
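
A quick way to confirm which tier is actually serving a given index (a sketch; the system-log-cloud-2.0-* pattern comes from the slowlog entry further down) is the _cat shards API, which lists each shard alongside the node it lives on:

GET _cat/shards/system-log-cloud-2.0-*?v&h=index,shard,prirep,state,node,store

Cross-referencing the node column against the hot/warm/cold node names shows which tier a search against a given index has to touch.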

First noticed after upgrading Elasticsearch and Kibana from 6.7.1 to 7.1.1. Persisted after upgrading to 7.3.0 and moving to ILM indices.
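
For reference, the ILM setup is roughly shaped like the sketch below; the policy name, ages, and the data node attribute are illustrative rather than our exact configuration, but the hot-to-warm-to-cold movement at 5-7 days matches what's described above:

PUT _ilm/policy/system-log-cloud-policy
{
  "policy": {
    "phases": {
      "hot":  { "min_age": "0ms", "actions": { "set_priority": { "priority": 100 } } },
      "warm": { "min_age": "5d",  "actions": { "allocate": { "require": { "data": "warm" } } } },
      "cold": { "min_age": "7d",  "actions": { "allocate": { "require": { "data": "cold" } } } }
    }
  }
}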

See also: Long request time after upgrade to 7.1 and https://github.com/elastic/kibana/issues/43417

Steps to reproduce:

  1. Add a filter to the Discover page, e.g. hostname:"server1" with the time picker set to the default "Last 15 minutes".
  2. The query frequently times out.
  3. After a timeout, refreshing will often return results quickly.

The Kibana inspector shows a valid date range in the query, like:
"range": {
"@timestamp": {
"format": "strict_date_optional_time",
"gte": "2019-08-20T11:43:13.790Z",
"lte": "2019-08-20T11:58:13.790Z"
}
}
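
As a sanity check on the date range itself, here's a sketch of running the same bounded query directly against the full system-log-cloud-2.0-* pattern (the pattern is taken from the slowlog below) and looking at the shard counts in the response:

GET system-log-cloud-2.0-*/_search
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": {
        "format": "strict_date_optional_time",
        "gte": "2019-08-20T11:43:13.790Z",
        "lte": "2019-08-20T11:58:13.790Z"
      }
    }
  }
}

When the search spans enough shards (more than the pre_filter_shard_size default of 128, if I recall correctly), Elasticsearch's pre-filter phase should skip shards whose @timestamp min/max falls entirely outside the range, and those shards show up as _shards.skipped in the response rather than being executed on the warm/cold nodes.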

Yet if I search in Kibana for something like hostname:"server1", I'll see entries like this in the slowlog on my cold ES nodes:

[2019-08-20T06:58:19,376][TRACE][i.s.s.query ] [usfitesa31] [system-log-cloud-2.0-2019.06.28][0] took[1s], took_millis[1009], total_hits[10000+ hits], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[280], source[{"size":0,"timeout":"1000ms","terminate_after":100000,"query":{"match_all":{"boost":1.0}},"aggregations":{"suggestions":{"terms":{"field":"hostname","size":10,"shard_size":10,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"execution_hint":"map","order":[{"_count":"desc"},{"_key":"asc"}],"include":"server1.*"}}}}], id[],

Note that it's searching an index from 2019.06.28 when my query from Kibana was submitted today (2019.08.20) for hits in the last 15 minutes.
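
For readability, the source portion of that slowlog entry pretty-prints to the following (same query, just reformatted):

{
  "size": 0,
  "timeout": "1000ms",
  "terminate_after": 100000,
  "query": { "match_all": { "boost": 1.0 } },
  "aggregations": {
    "suggestions": {
      "terms": {
        "field": "hostname",
        "size": 10,
        "shard_size": 10,
        "min_doc_count": 1,
        "shard_min_doc_count": 0,
        "show_term_doc_count_error": false,
        "execution_hint": "map",
        "order": [ { "_count": "desc" }, { "_key": "asc" } ],
        "include": "server1.*"
      }
    }
  }
}

Also worth noting: there is no range clause on @timestamp in this query at all, just a match_all with a terms aggregation on hostname that includes "server1.*".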

Hi all, we're continuing to be plagued by this issue. I can consistently see queries showing up in slowlog on our warm/cold nodes searching indices from a month ago when my query in Kibana is for the last 15 minutes of data.

Does anyone have thoughts on where I can look next to determine why Elasticsearch is searching every index that matches the pattern rather than only the ones that hold data in this time period?

Or even a resource that describes how Elasticsearch determines which indices to search for a particular query that's bounded by a date range?
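
In the meantime, a check that's easy to run (a sketch, using the index name from the slowlog above): confirm what time range an old index actually covers, which makes it clear that a query bounded to the last 15 minutes should never need to touch it:

GET system-log-cloud-2.0-2019.06.28/_search
{
  "size": 0,
  "aggs": {
    "oldest_doc": { "min": { "field": "@timestamp" } },
    "newest_doc": { "max": { "field": "@timestamp" } }
  }
}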

Found that this was caused by KQL's auto-suggest queries running against all indices that match the index pattern rather than just the indices that match the selected time range. Setting filterEditor:suggestValues to Off in Kibana's advanced settings restored normal performance.
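
For anyone who wants to flip this without clicking through the UI, the sketch below uses Kibana's internal settings endpoint; treat the route and payload as an assumption that may change between versions, and prefer Management > Advanced Settings if unsure:

curl -X POST "http://localhost:5601/api/kibana/settings" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"changes": {"filterEditor:suggestValues": false}}'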

See this thread for more detail: KQL-related performance issue
