avencar posted a very good question that describes exactly what we're seeing. I'll quote it here:
Hi Matthew,
I appreciate the response. We do have a time filter field name selected for the index pattern. We also have a large number of nodes and a lot of data, with each node carrying a heavy load (> 1000 active shards per node). I know we're pushing the limits here, but it still bothers me that Kibana isn't restricting the time-based indices it searches over when we know the data is only contained within those indices.
My main question is: why was this feature removed to begin with? More specifically, what changes were made in Elasticsearch to make querying over all indices/shards so efficient compared to Elasticsearch 2.x? It seems like we'd still have to iterate over all documents to see whether they fall within the requested time range.
Thanks again!
We're running 6.7.1, we have a total of 42 nodes of various types. We currently have below 300 shards per node.
The issue is that too many shards are being hit with these queries. If we do a search for the last 15 minutes, only one index should be hit, not everything that matches the pattern.
And no, we can't turn on the slow query log at the level where it catches everything. We already know this is happening to every query run by Kibana.
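(For context, a rough sketch of what "catches everything" would mean: the standard search slow log settings with the thresholds dropped to zero. The index pattern and thresholds here are illustrative, not something we'd actually apply across the whole cluster.)

```
PUT /log_*/_settings
{
  "index.search.slowlog.threshold.query.warn": "0ms",
  "index.search.slowlog.threshold.fetch.warn": "0ms"
}
```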
(If we use the api, we can duplicate the issue by using log_*. log_2019.09.01 works as expected.)
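Roughly, the comparison looks like this; the query bodies are a sketch assuming the default @timestamp field and a time window that falls inside 2019.09.01:

```
# Slow: hits every index matching the pattern
GET /log_*/_search
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-15m", "lte": "now" } }
  }
}

# Fast: hits only the single dated index
GET /log_2019.09.01/_search
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-15m", "lte": "now" } }
  }
}
```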
Hello Dave!
By default we run a pre-filter phase to determine if an index contains data matching your time filter, but only if you hit more than 128 shards. Is your search request hitting more than 128 shards?
If yes, your response should look something like this. Notice the skipped count:
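(Illustrative numbers only; the important part is the skipped value in the _shards section.)

```
{
  "took": 38,
  "timed_out": false,
  "_shards": {
    "total": 300,
    "successful": 300,
    "skipped": 280,
    "failed": 0
  },
  ...
}
```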
I forgot to mention that you can at least test this by changing the limit with a request in Dev Tools using the APIs. If a query like the one below works, then we definitely know what the issue is.
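A sketch of what I mean, assuming your log_* pattern and the default @timestamp field; pre_filter_shard_size=1 forces the pre-filter phase to run regardless of how many shards the request targets:

```
GET /log_*/_search?pre_filter_shard_size=1
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-15m", "lte": "now" } }
  }
}
```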
We can duplicate the issue via the API, so it's not just Kibana. If we do a dated search, it's 3-5 times faster than using the index pattern. Since our users use Kibana, that's what we're focusing on.
It shouldn't add that much overhead to check which indices contain the time range. It should be cached somewhere, actually. (If this were a user-defined field, then yes, it would make sense.)
As it is, doing a global search for just the last 15 minutes causes the Kibana Elasticsearch plugin to time out, which causes Kibana to restart it and takes that entire Kibana node out for about 10 minutes.
The old method, where Kibana determined the indices to query based on the index name, was quite efficient but had a number of limitations.
The link between the index name and the timestamp in the documents had to rely on a single fixed field, @timestamp. This meant that all indices always had to be queried if the index pattern was based on any other timestamp field.
It relied on the application(s) indexing into Elasticsearch to set the index name correctly based on the @timestamp field, which is not always guaranteed unless Beats or Logstash is used.
It did not work with more flexible indexing schemes, e.g. rollover.
I therefore think a more flexible solution is required. Caching the min and max values of date fields in each index, probably in the cluster state, would make it a lot faster to determine exactly which indices need to be queried. This would however cause a lot of cluster state updates, which is not good.
There is however a mechanism to explicitly make indices read-only, and I believe this is supported by ILM. If caching the min and max values of date fields could be done whenever an index is made read-only, the cached values would stay relevant for longer and the number of cluster state updates would drop dramatically. For use cases with long retention periods it should be possible to make indices read-only after a while, which could greatly improve query performance, as only a small number of indices would always need to be queried.
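As a minimal sketch (the policy name and min_age are hypothetical), the read-only step could be expressed as an ILM warm-phase readonly action:

```
PUT _ilm/policy/logs_readonly
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "7d",
        "actions": {
          "readonly": {}
        }
      }
    }
  }
}
```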
Can you share a specific example of two searches via the API that do the same thing but exhibit the performance difference we're discussing, cutting Kibana out of the equation? Can you then share the results for each (at least the took and _shards fields as above)? Then can you profile those searches to help us see where the time is actually being spent? There'll be more output from that than this forum can cope with, so use https://gist.github.com (or similar) to share it.
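For the profiling, adding "profile": true to each search body is enough; here is a sketch of a profiled version of the pattern-based search (the time range and the rest of the body are illustrative):

```
GET /log_*/_search
{
  "profile": true,
  "size": 0,
  "query": {
    "range": {
      "@timestamp": {
        "gte": 1567627030000,
        "lte": 1567627040000,
        "format": "epoch_millis"
      }
    }
  }
}
```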
The time range for each shard is already (effectively) cached for quick access: this is how the pre-filter phase mentioned above works. In addition, Elasticsearch optimises a range-based search into a match_none search on shards that don't contain any docs that match the range, and match_none searches should run pretty quickly. The profiler output should help to show why these two mechanisms are not doing what we want.
Also, did these two searches yield the same documents? The time range you used, 1567627030000 to 1567627040000, falls on 2019-09-04, whereas the one-day search you did was for 2019-08-04.
A colleague pointed me to #40263 which was merged into 6.7.2 and which might speed up queries that hit a lot of indices. Can you upgrade and see if this helps?
I would say that you have too many shards here by about a factor of 4. None of these indices is large enough to warrant three primaries. There are only two that exceed 40GB (which would suggest a need for ≥ 2 primaries) and none that exceed 80GB (which would suggest ≥ 3). Many of them are measured in MB rather than GB, and those could reasonably be monthly indices instead.
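If you want to reduce the primary count on existing indices, here is a rough sketch using the shrink API (the index and node names are hypothetical placeholders):

```
# Block writes and collect all shards of the source index on one node
PUT /log_syslog-2019.08.01/_settings
{
  "index.blocks.write": true,
  "index.routing.allocation.require._name": "node-1"
}

# Shrink from 3 primaries down to 1 and clear the allocation requirement
POST /log_syslog-2019.08.01/_shrink/log_syslog-2019.08.01-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.routing.allocation.require._name": null
  }
}
```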
Yes, and now that we have access to ILM (we've recently updated to 6.x), we'll be able to clean that up. That's not the issue.
The issue is that Elasticsearch spends way too much time determining that (for example) the only index that might have documents in the desired time range is log_syslog-2019.09.04. It spends 20 times as long checking all the other log_syslog-* indices, when the pre-filter phase should short-circuit that.
If I open two months of logs instead of just one, that becomes 40 times longer. After three months, things just time out.