We have a daily index with about 30K unique devices.
From Kibana we request the unique device counts per day, per version.
If the date range is large enough, the client node runs out of memory and has to be restarted.
I saw that it is still not possible to have the request circuit breaker apply on a client node (see Improved Request Circuit Breaker).
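For reference, the request breaker limit itself can be tuned on the data nodes via the cluster settings API (the 30% below is just an example value), but as far as I understand that does not help on the client node, which is where the OOM happens:

PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.request.limit": "30%"
  }
}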
The query that Kibana sends is basically an _msearch across a lot of indices, and I guess that requesting them all together is one part of the problem.
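To illustrate, the request is shaped roughly like this (the index names here are made up and the body is trimmed; the real body is the full query shown further below):

GET /_msearch
{"index": ["devices-2016.02.26", "devices-2016.02.27", "devices-2016.02.28"], "ignore_unavailable": true}
{"size": 0, "aggs": {"devices": {"cardinality": {"field": "deviceid", "precision_threshold": 30000}}}}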
My question is: what options do I have to limit the memory usage for the aggregations?
Below is the query:
{
  "query": {
    "filtered": {
      "query": {
        "query_string": {
          "query": "*",
          "analyze_wildcard": true
        }
      },
      "filter": {
        "bool": {
          "must": [
            {
              "query": {
                "query_string": {
                  "query": "Country:CZ",
                  "analyze_wildcard": true
                }
              }
            },
            {
              "range": {
                "ts": {
                  "gte": 1456483506632,
                  "lte": 1457088306632,
                  "format": "epoch_millis"
                }
              }
            }
          ],
          "must_not": []
        }
      }
    }
  },
  "size": 0,
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "ts",
        "interval": "1h",
        "time_zone": "GMT+0",
        "min_doc_count": 1,
        "extended_bounds": {
          "min": 1456483506626,
          "max": 1457088306626
        }
      },
      "aggs": {
        "3": {
          "terms": {
            "field": "version",
            "size": 5,
            "order": {
              "1": "desc"
            }
          },
          "aggs": {
            "1": {
              "cardinality": {
                "field": "deviceid",
                "precision_threshold": 30000
              }
            }
          }
        }
      }
    }
  }
}
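One knob I can see in the query itself is precision_threshold. As far as I understand, the cardinality aggregation keeps roughly precision_threshold * 8 bytes of state per bucket, so hourly buckets x 5 versions x 30000 counters adds up quickly over a wide date range. Dropping the threshold back toward the default of 3000 trades some accuracy for memory; a sketch of just the innermost block of the query above:

"aggs": {
  "1": {
    "cardinality": {
      "field": "deviceid",
      "precision_threshold": 3000
    }
  }
}

Since we only need per-day uniques anyway, switching the date_histogram interval from 1h to 1d for long ranges would also cut the number of buckets considerably. Still, I would prefer a node-level cap over tuning every visualization, if one exists.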