Time range query performance [7.6]

Hello!

While investigating the performance of time range queries, I found some surprising behaviour and was wondering if anyone had insights into what optimisations or heuristics happen behind the scenes that might explain it (and perhaps ideas about the best way to build such queries).

For a bit of context: we make heavy use of time-sorted indices and time range filters, and at least in some cases it seems that the range filter slows queries down significantly. While investigating whether the improvements in ES 7.6 would help us (with that and in general), I still found that range queries could be problematic and was puzzled by how they behaved.

I then worked out a contrived but minimal example highlighting the situation, in the hope that it can be reproduced and explained. I'm pasting a Go program further below.

Essentially, it sets up an index with a single @timestamp date field and index sorting turned on, then fills it with documents whose timestamps are picked uniformly at random within a fixed time range.
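For reference, the index created by the program below should be roughly equivalent to this console request (same settings and mapping as in the Go code; the index name testindex comes from the program):

PUT /testindex
{
  "settings": {
    "index.number_of_replicas": 0,
    "sort.field": "@timestamp",
    "sort.order": "desc"
  },
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" }
    }
  }
}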

Hopefully these tables of results speak for themselves; happy to provide any further clarifications otherwise. Times are in milliseconds. The documents are all in the range [2020-01-01, 2020-02-01), and the date on each row is the upper limit set in the query. The profiling false/true columns do mean that profiling of the search is turned off/on! Using ES 7.6.1.

The absolute performance of course varies with the number of documents, size of the ES node(s), etc. but the qualitative behaviour seems consistent.

testing query
{
  "range": {
    "@timestamp": {
      "from": null,
      "include_lower": true,
      "include_upper": false,
      "to": "2020-01-01"
    }
  }
}
date \ profiling         false      true
2019-09-23                 0.0       0.0
2020-01-01                 0.2       0.0
2020-01-04                 4.3       6.9
2020-01-07                11.1      11.4
2020-01-10                15.2      15.1
2020-01-13                20.3      16.1
2020-01-16                20.2      19.6
2020-01-19                21.8      27.0
2020-01-22                16.1      15.8
2020-01-25                10.1      14.0
2020-01-28                 6.5       7.0
2020-01-31                 2.2       3.0
2020-02-01               105.1       0.1
2020-05-11               177.0       0.0

testing query
{
  "bool": {
    "filter": {
      "range": {
        "@timestamp": {
          "from": null,
          "include_lower": true,
          "include_upper": false,
          "to": "2020-01-01"
        }
      }
    }
  }
}
date \ profiling         false      true
2019-09-23                 0.1       0.0
2020-01-01                 0.1       0.9
2020-01-04                 7.8       5.2
2020-01-07                15.1      10.4
2020-01-10                22.6      18.0
2020-01-13                31.7      18.3
2020-01-16                37.7      19.9
2020-01-19                37.0      20.0
2020-01-22                39.0      16.4
2020-01-25                36.2      13.3
2020-01-28                34.2       7.8
2020-01-31                33.2       4.8
2020-02-01               174.9       0.0
2020-05-11               175.7       0.0

And here's the Go program for doing this:

package main

import (
	"context"
	"encoding/json"
	"flag"
	"fmt"
	"log"
	"math/rand"
	"os"
	"time"

	"github.com/olivere/elastic/v7"
)

const (
	indexName = "testindex"
	timeField = "@timestamp"

	numDocs        = 1e7
	progressUpdate = 1e6
)

var (
	dateMin = newDate(2020, 1, 1)
	dateMax = newDate(2020, 2, 1)
)

func main() {
	var (
		generate = flag.Bool("generate", false, "generate test data")
		query    = flag.Bool("query", false, "run the query test")
		address  = flag.String("address", "http://localhost:9200", "elasticsearch URL")
	)
	flag.Parse()
	client := newClient(*address)
	if *generate {
		generateData(client)
	} else if *query {
		queryTest(client)
	} else {
		flag.Usage()
		os.Exit(2)
	}
}

type esClient struct {
	ctx context.Context
	*elastic.Client
}

func newClient(address string) esClient {
	client, err := elastic.NewSimpleClient(
		elastic.SetURL(address),
		elastic.SetGzip(true),
	)
	if err != nil {
		panic(err)
	}
	return esClient{
		ctx:    context.Background(),
		Client: client,
	}
}

// generateData (re)creates the test index, with index sorting on the timestamp
// field, and bulk-indexes numDocs documents whose timestamps are drawn
// uniformly at random from [dateMin, dateMax).
func generateData(client esClient) {
	client.DeleteIndex(indexName).Do(client.ctx)

	indexSettings := map[string]interface{}{
		"index.number_of_replicas": 0,
		"sort.field":               timeField,
		"sort.order":               "desc",
	}
	indexMappings := map[string]map[string]map[string]string{
		"properties": {
			timeField: {"type": "date"},
		},
	}
	settings := map[string]interface{}{
		"settings": indexSettings,
		"mappings": indexMappings,
	}
	_, err := client.CreateIndex(indexName).
		BodyJson(settings).
		Do(client.ctx)
	if err != nil {
		panic(err)
	}

	log.Println("created index", indexName)

	bulk, err := client.BulkProcessor().
		BulkActions(-1).
		Do(client.ctx)
	if err != nil {
		panic(err)
	}
	defer func() {
		if err := bulk.Close(); err != nil {
			panic(err)
		}
	}()

	for i := 0; i < numDocs; i++ {
		d := time.Duration(rand.Int63n(dateMax.Sub(dateMin.Time).Nanoseconds()))
		doc := map[string]interface{}{
			timeField: dateMin.Add(d),
		}
		r := elastic.NewBulkIndexRequest().
			Index(indexName).
			Doc(doc)
		bulk.Add(r)

		if num := i + 1; num%progressUpdate == 0 {
			log.Printf("%10d docs", num)
		}
	}
}

// queryTest prints, for the plain range query and for the same query wrapped in
// a bool filter, the mean query time (profiling off and on) for a series of
// upper limits spanning the indexed time range.
func queryTest(client esClient) {
	for _, asBoolQ := range []bool{false, true} {
		printLine := func(endDate date) {
			q := newQuery(endDate, asBoolQ)
			fmt.Printf("%-20s%10.1f%10.1f\n", endDate, client.meanQueryTime(q, false), client.meanQueryTime(q, true))
		}

		fmt.Printf("testing query\n%s\n", queryToString(newQuery(dateMin, asBoolQ)))

		fmt.Printf("%-20s%10s%10s\n", "date \\ profiling", "false", "true")

		// Step (in days) chosen so the data range is covered in roughly ten rows.
		step := int(dateMax.Sub(dateMin.Time) / (240 * time.Hour))

		printLine(dateMin.AddDays(-100))
		for dt := dateMin; dt.Before(dateMax.Time); dt = dt.AddDays(step) {
			printLine(dt)
		}
		printLine(dateMax)
		printLine(dateMax.AddDays(100))
		fmt.Println()
	}
}

// meanQueryTime runs the query n times, clearing the index caches before each
// run, and returns the average of the reported "took" times in milliseconds.
func (client esClient) meanQueryTime(q elastic.Query, profile bool) float64 {
	n := 10

	tot := 0
	for i := 0; i < n; i++ {
		client.clearCache()
		tot += client.queryTook(q, profile)
	}
	return float64(tot) / float64(n)
}

func (client esClient) clearCache() {
	_, err := client.ClearCache(indexName).Do(client.ctx)
	if err != nil {
		panic(err)
	}
}

func (client esClient) queryTook(q elastic.Query, profile bool) int {
	search := client.Search(indexName).
		Query(q).
		Profile(profile).
		Size(1)
	resp, err := search.Do(client.ctx)
	if err != nil {
		panic(err)
	}
	return int(resp.TookInMillis)
}

type date struct {
	time.Time
}

func newDate(y, m, d int) date {
	return date{time.Date(y, time.Month(m), d, 0, 0, 0, 0, time.UTC)}
}

func (dt date) String() string {
	return dt.Format("2006-01-02")
}

func (dt date) AddDays(days int) date {
	return date{dt.AddDate(0, 0, days)}
}

// newQuery builds a range query on the timestamp field with the given (exclusive)
// upper limit, optionally wrapped in a bool filter.
func newQuery(endDate date, boolQ bool) elastic.Query {
	var q elastic.Query
	q = elastic.NewRangeQuery(timeField).Lt(endDate.String())
	if boolQ {
		q = elastic.NewBoolQuery().Filter(q)
	}
	return q
}

func queryToString(q elastic.Query) string {
	src, err := q.Source()
	if err != nil {
		panic(err)
	}
	data, err := json.MarshalIndent(src, "", "  ")
	if err != nil {
		panic(err)
	}
	return string(data)
}

I am not sure what you are trying to highlight here. One thing you need to be aware of is that queries get cached. That is the reason why, in your case, the second run (profiling == true) is faster than the first one.

Hi! Thanks for getting back to me.

Sorry, the details of the test are all in the Go code. I should probably have explained what it's doing:

  • Each query is run 10 times and the average is reported.
  • /{index}/_cache/clear is called before each query.
  • Each query requests size: 1 (see the console sketch just below).
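In console form, each measurement boils down to something like this (index name testindex as in the program; the profile flag is toggled between false and true, and the upper limit varies per row):

POST /testindex/_cache/clear

GET /testindex/_search
{
  "profile": false,
  "size": 1,
  "query": {
    "range": {
      "@timestamp": { "lt": "2020-01-16" }
    }
  }
}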

Regarding query caching, I thought with profile: true the cache isn't used anyway (and indeed that's what I see: not clearing the cache doesn't matter for searches with profiling on).

What I want to highlight:

  • When the range of documents in the index is fully contained within the search range, the query is suddenly much slower, unless profiling is turned on.
  • When the range query is wrapped in a bool filter (which we typically do, since we never filter on time alone), the query takes longer, again unless profiling is turned on.

I've now also tried with index sorting off, and on 7.5.0: in these cases, when profiling is on, the query is much slower (~500 ms) when some documents (but not all) are within the search range.

Hey again. Did the additional explanations clarify things enough?

Just to add more context and explain why this matters to us: on a larger test deployment where we have about 4.5 billion docs over ~70 indices (timestamp-sorted) on 4 data nodes (running 7.5 this time, though), we see this for instance...

GET /search_alias/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "from": "now-1M/d",
        "to": "now+1d/d"
      }
    }
  },
  "track_total_hits": false,
  "size": 1
}

takes easily 10-15 seconds, while

GET /search_alias/_search
{
  "query": {
    "match_all": {}
  },
  "track_total_hits": false,
  "size": 1
}

is (obviously?) near instant (~5ms). Same if we request timestamp-sorting of the results. And also similar if we add other filters to the queries.

From profiling, it seems that when doing a range query it has to check for the existence of @timestamp; could this explain why it takes so long?
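For reference, that comes from running the slow query above with profiling enabled, along these lines (the same request, just with the profile flag added):

GET /search_alias/_search
{
  "profile": true,
  "query": {
    "range": {
      "@timestamp": {
        "from": "now-1M/d",
        "to": "now+1d/d"
      }
    }
  },
  "track_total_hits": false,
  "size": 1
}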

Yes, so I've finally looked at some code and indeed this is what's happening:
between this (L457) and that (L477), when the range of the query covers the whole shard, an exists query is run instead, and it turns out these can be quite slow, sadly...
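In other words, when the whole shard matches, the range appears to be replaced by an exists query on the field, so the slowness should be reproducible directly with something along these lines against the toy index:

GET /testindex/_search
{
  "query": {
    "exists": { "field": "@timestamp" }
  },
  "track_total_hits": false,
  "size": 1
}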

The whole thing with profiling is not clear at all: I've restarted a local ES instance and rerun everything a few times, and sometimes I get the behaviour above, and sometimes the queries with profiling turned on are slow (a few hundred ms) when the range intersects.

Do you have deleted documents in the shard?

Nope.

Here's another question: would it make sense for ES to use PointRangeQuery instead of TermRangeQuery? Has it not been considered yet because it's still marked as an 'experimental' API? Or is there another more subtle reason? Or just a matter of time?

That is telling me that the field @timestamp has been indexed as a keyword not as a date field. How did you generate the mappings?

Nope, it should really be a date.
In the toy example, the mappings are set at index creation here:

And if I query the index mappings it tells me indeed that

"mappings":{"properties":{"@timestamp":{"type":"date"}}}

There is something strange because performing a range query on a date field should be using the following query, which is a PointRangeQuery:
https://github.com/elastic/elasticsearch/blob/51cff748a758311f45c49c26c25ec22971e54031/server/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java#L378

If you are seeing a TermRangeQuery in the profiler then there is something strange. The first thing I would check is that you are really hitting this index and not another one that contains that field with a different type. Could you share the output of the profiler?

Oh no sorry! I was asking about PointRangeQuery because I couldn't find it when I tried to look at the code. Thanks for pointing out where this happens... This part is probably fine.

The thing that still confuses me, though, is why exists is so slow and why it is used when a queried range encompasses a shard's range (and potentially also why range queries seem slower in filter contexts).

I think the overhead you are seeing is the query actually getting cached. I have tried your example and I could reproduce it. In order to get the same speed with and without profiling, I had to run the following commands to disable query caching:

POST /testindex/_close

PUT /testindex/_settings
{
    "index.queries.cache.enabled": false
}

POST /testindex/_open

Then if I run your script I get the following numbers:

date \ profiling         false      true
2019-09-23                 0.0       0.0
2020-01-01                 0.0       0.0
2020-01-04                 3.0       4.0
2020-01-07                 5.1       6.0
2020-01-10                 8.0       9.0
2020-01-13                10.4      12.7
2020-01-16                13.2      14.0
2020-01-19                13.1      13.9
2020-01-22                10.0      11.0
2020-01-25                 7.0       8.0
2020-01-28                 4.1       5.0
2020-01-31                 1.0       2.1
2020-02-01                 0.0       0.0
2020-05-11                 0.0       0.0

Oooh, this is very interesting! Thanks!

So basically exists is so slow because it spends time caching all the documents (since they all have the timestamp field)? Still annoying, but at least it makes sense now.
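Presumably one way to check that would be the query cache section of the index stats API, something like:

GET /testindex/_stats/query_cache

where the cache's memory size and entry count would be expected to grow as the exists query gets cached.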

Is there any way to disable the caching of specific terms or parts of queries, or something like this?

I am afraid you cannot disable the query cache for only parts of a query.

On the other hand, I have discussed this behaviour with other people and we agreed that we should not cache DocValuesFieldExistsQuery, as the query is already very fast and the caching overhead makes little sense. This has already been fixed in the upstream project: https://issues.apache.org/jira/browse/LUCENE-9287

Amazing! Thanks a lot for the great news!

How long until it makes it into an ES release, though? :wink:

That is something I am not able to answer. Lucene 8.5 was actually released today so the change won't make it into Elasticsearch until Lucene 8.6 is released.

Ow, that's unfortunate. Oh well. That's life... Thanks again anyway!
