When I run a query like this:
{
  "size": 0,
  "query": {
    "match_all": {}
  },
  "aggregations": {
    "filter": {
      "filter": {
        "match_all": {}
      },
      "aggregations": {
        "createdAt_1h": {
          "date_histogram": {
            "field": "createdAt",
            "interval": "1h",
            "time_zone": "Asia/Shanghai"
          },
          "aggregations": {
            "sum(num)": {
              "sum": {
                "field": "num"
              }
            },
            "sum(price)": {
              "sum": {
                "field": "price"
              }
            }
          }
        }
      }
    }
  }
}
and search TPS is high, the node runs into memory problems.
So I'd like to ask:
does the coordinating node for an aggregation like this only receive intermediate (shard-level) results such as:
"buckets": [
  {
    "doc_count": XXX,
    "sum(num)": {
      "value": XXX
    },
    "sum(price)": {
      "value": XXX
    },
    "key": 1476806400000
  },
and then merge/sum those up? Or does the coordinator fetch the raw documents and do the summing in its own memory?
If it is the latter, should we open an issue to ask for an optimization?
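To make the first interpretation concrete, here is a minimal sketch of what I imagine "merging shard-level partial results by bucket key" would look like. This is only an illustration of the idea, not Elasticsearch internals; the function name and data shapes are my own assumptions.

```python
# Hypothetical sketch: a coordinator merging per-shard date_histogram
# partial results by bucket key, summing doc_count and the nested sums.
# This is NOT Elasticsearch's actual implementation.

def merge_shard_buckets(shard_results):
    """Merge lists of shard-level buckets keyed by 'key',
    summing doc_count and the nested sum aggregations."""
    merged = {}
    for buckets in shard_results:
        for b in buckets:
            key = b["key"]
            if key not in merged:
                merged[key] = {
                    "key": key,
                    "doc_count": 0,
                    "sum(num)": {"value": 0.0},
                    "sum(price)": {"value": 0.0},
                }
            m = merged[key]
            m["doc_count"] += b["doc_count"]
            m["sum(num)"]["value"] += b["sum(num)"]["value"]
            m["sum(price)"]["value"] += b["sum(price)"]["value"]
    # Return buckets sorted by key, as in a date_histogram response.
    return sorted(merged.values(), key=lambda b: b["key"])

# Two shards reporting partial results for the same hourly bucket:
shard_a = [{"key": 1476806400000, "doc_count": 2,
            "sum(num)": {"value": 5.0}, "sum(price)": {"value": 10.0}}]
shard_b = [{"key": 1476806400000, "doc_count": 3,
            "sum(num)": {"value": 7.0}, "sum(price)": {"value": 20.0}}]
merged = merge_shard_buckets([shard_a, shard_b])
```

If the coordinator works this way, its memory cost scales with the number of buckets rather than the number of matching documents, which is why the answer matters for the high-TPS memory issue above.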