Hey,
So we have a query like the following:
{
  "sort": {
    "date.untouched": "desc"
  },
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "must": [
            {
              "range": {
                "_cache": false,
                "date.untouched": {
                  "gte": "2012-10-16T18:33:42Z",
                  "lte": "2013-11-10T05:31:15Z"
                }
              }
            },
            [
              {
                "term": {
                  "_cache": false,
                  "foo.id": 1
                }
              },
              {
                "term": {
                  "bar.id": 1,
                  "_cache": false
                }
              }
            ]
          ]
        }
      }
    }
  },
  "size": 75
}
and the responses we get back look something like this:
{
  "took" : 1334,
  "timed_out" : false,
  "_shards" : {
    "total" : 8,
    "successful" : 8,
    "failed" : 0
  },
  "hits" : {
    "total" : 8138,
    "max_score" : null,
    "hits" : [ {
...
Now, if I drop the range filter:
{
  "sort": {
    "date.untouched": "desc"
  },
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "must": [
            [
              {
                "term": {
                  "_cache": false,
                  "foo.id": 1
                }
              },
              {
                "term": {
                  "bar.id": 1,
                  "_cache": false
                }
              }
            ]
          ]
        }
      }
    }
  },
  "size": 75
}
It returns results super fast:
{
  "took" : 53,
  "timed_out" : false,
  "_shards" : {
    "total" : 8,
    "successful" : 8,
    "failed" : 0
  },
  "hits" : {
    "total" : 7257,
    "max_score" : null,
    "hits" : [ {
...
Now, what are we doing wrong with the range filter, or is this just the expected
performance for it? A little info on our setup: this is against a single index
(for the testing I used a different index each time to avoid hitting cached
results), and we have the filter cache disabled because of the heap memory it
needs. We have enabled the filter cache in the past (for small tests) and it
seemed to make little difference. We split the index by time (index-2013.11,
index-2013.10, etc.), and if the range is smaller (say just a day, as in the
example below) it's also pretty quick, but if the range spans a large date gap
it seems to slow everything down.
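For reference, the same query with a roughly one-day range comes back quickly;
it looks something like the following (the exact dates here are just an
illustration):
{
  "sort": {
    "date.untouched": "desc"
  },
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "must": [
            {
              "range": {
                "_cache": false,
                "date.untouched": {
                  "gte": "2013-11-09T00:00:00Z",
                  "lte": "2013-11-10T00:00:00Z"
                }
              }
            },
            [
              {
                "term": {
                  "_cache": false,
                  "foo.id": 1
                }
              },
              {
                "term": {
                  "bar.id": 1,
                  "_cache": false
                }
              }
            ]
          ]
        }
      }
    }
  },
  "size": 75
}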
Any idea?
Thanks
Zuhaib