I am running the following query:
{
  "aggs": {
    "2": {
      "terms": {
        "field": "deviceID.keyword",
        "order": {
          "_key": "desc"
        },
        "size": 500000
      },
      "aggs": {
        "1": {
          "top_hits": {
            "_source": "deviceIP",
            "size": 1
          }
        }
      }
    }
  },
  "size": 0,
  "_source": {
    "excludes": []
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {},
  "docvalue_fields": [
    {
      "field": "@timestamp",
      "format": "date_time"
    }
  ],
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-30d"
            }
          }
        }
      ],
      "filter": [
        {
          "match_all": {}
        }
      ],
      "should": [],
      "must_not": []
    }
  }
}
I am getting the following exception, which is quite self-explanatory:

"reason" : {
  "type" : "too_many_buckets_exception",
  "reason" : "Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.",
  "max_buckets" : 10000
}
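For reference, the limit the error mentions is the search.max_buckets cluster-level setting; if raising it turns out to be the answer, I assume the change would look something like this (65536 is just an arbitrary example value, not a recommendation):

PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 65536
  }
}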
Can I instead split the query into batches, so that I don't have to increase the max_buckets setting at all? Or is raising max_buckets the only way to get the IPs for something like 50k devices?
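What I have in mind is paging through a composite aggregation instead of one huge terms aggregation. A rough sketch, reusing the field names from my query above (my-index, the devices/latest_ip/device names, and the page size of 1000 are all placeholders I made up):

GET my-index/_search
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-30d"
      }
    }
  },
  "aggs": {
    "devices": {
      "composite": {
        "size": 1000,
        "sources": [
          {
            "device": {
              "terms": {
                "field": "deviceID.keyword",
                "order": "desc"
              }
            }
          }
        ]
      },
      "aggs": {
        "latest_ip": {
          "top_hits": {
            "_source": "deviceIP",
            "size": 1
          }
        }
      }
    }
  }
}

Each response would include an after_key that I'd feed back in as "after": { "device": "<last key>" } inside the composite block to fetch the next page of buckets. Would that be the right approach here?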