Watcher applying aggregations on all documents — need to apply aggregations only from now-2m to now

Hi,

I am trying to implement a watcher to monitor the Elastic Stack components:
3 elasticsearch servers
2 logstash servers
1 kibana server

I need to aggregate the filesystem used percentage for each of the components over the last two minutes and check whether the highest value is greater than, say, 49 percent.

But the aggregation is performed over all documents, so I am getting the same highest value each time instead of the highest value from the last two minutes.

The watcher is:

{
  "trigger": {
    "schedule": {
      "interval": "10h"
    }
  },
  "input": {
    "search": {
      "request": {
        "search_type": "query_then_fetch",
        "indices": [
          "metricbeat*"
        ],
        "types": [],
        "body": {
          "size": 1,
          "query": {
            "bool": {
              "should": [
                { "match_phrase": { "beat.hostname": "elsearchdv1" } },
                { "match_phrase": { "beat.hostname": "elsearchdv2" } },
                { "match_phrase": { "beat.hostname": "elsearchdv3" } },
                { "match_phrase": { "beat.hostname": "logstashdv" } },
                { "match_phrase": { "beat.hostname": "logstashdv2" } },
                { "match_phrase": { "beat.hostname": "kibanadv" } }
              ],
              "minimum_should_match": 1
            }
          },
          "aggs": {
            "range": {
              "date_range": {
                "field": "@timestamp",
                "ranges": [
                  { "to": "now-2m" },
                  { "from": "now" }
                ]
              }
            },
            "host": {
              "terms": {
                "field": "beat.hostname",
                "size": 10,
                "order": {
                  "pct": "desc"
                }
              },
              "aggs": {
                "pct": {
                  "max": {
                    "field": "system.filesystem.used.pct",
                    "script": {
                      "source": "doc['system.filesystem.used.pct'].value * 100",
                      "lang": "painless"
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.aggregations.host.buckets.0.pct.value": {
        "gt": 49
      }
    }
  },
  "actions": {
    "send_email": {
      "email": {
        "profile": "standard",
        "to": [
          "mail_id.com"
        ],
        "subject": "High FS Usage",
        "body": {
          "html": "{{ctx.payload}}"
        }
      }
    }
  }
}

The output is:
{
  _shards={total=50, failed=0, successful=50, skipped=0},
  hits={hits=[{_index=metricbeat-6.7.1-2019.06.20, _type=doc, _source={@timestamp=2019-06-19T19:00:07.742Z, system={network={in={bytes=8417351752231, dropped=375874, errors=0, packets=6010897628}, name=eth0, out={bytes=5720247058228, dropped=0, packets=1297085121, errors=0}}}, beat={hostname=elsearchdv1, name=elsearchdv1, version=6.7.1}, host={os={codename=Maipo, name=Red Hat Enterprise Linux Server, family=, version=7.6 (Maipo), platform=rhel}, containerized=true, name=elsearchdv1, id=c0c14a6b5a364b5eae58d506fdd61dca, architecture=x86_64}, metricset={rtt=266, module=system, name=network}, event={duration=266760, dataset=system.network}}, _id=M7QccWsB9ckpm5avTSGq, _score=6.07065}], total=51347588, max_score=6.07065},
  took=1794,
  timed_out=false,
  aggregations={
    host={doc_count_error_upper_bound=0, sum_other_doc_count=0, buckets=[
      {pct={value=100.0}, doc_count=254135, key=logstashdv},
      {pct={value=100.0}, doc_count=254953, key=logstashdv2},
      {pct={value=73.2}, doc_count=221815, key=elsearchdv3},
      {pct={value=71.1}, doc_count=217809, key=elsearchdv1},
      {pct={value=70.0}, doc_count=220226, key=elsearchdv2},
      {pct={value=42.9}, doc_count=50178650, key=kibanadv}
    ]},
    range={buckets=[
      {doc_count=51347588, to_as_string=2019-06-20T13:34:18.219Z, to=1.561037658219E12, key=-2019-06-20T13:34:18.219Z},
      {from_as_string=2019-06-20T13:32:18.219Z, doc_count=12456, from=1.561037538219E12, key=2019-06-20T13:32:18.219Z-}
    ]}
  }
}

Hi,

In the bool clause of your query, you need to filter on the time range you're interested in.

See the examples here for many ways to do it:

Maybe even start from one of them and change it piece by piece into what you want, to learn more quickly by example. Many aspects of ES queries are at play here, and they all have their own meaning. The time range you're interested in doesn't go in the aggs part of your query; it belongs in the query itself. The different examples cover most of those cases.
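As a rough sketch (untested, assuming an ES 6.x Metricbeat index with an `@timestamp` date field, as in your output): adding a `filter` clause with a `range` on `@timestamp` to the existing `bool` restricts both the hits and every aggregation to the last two minutes, so the separate `date_range` agg is no longer needed. Only the first `should` clause is shown; the rest stay as in your watcher.

```json
"query": {
  "bool": {
    "filter": [
      {
        "range": {
          "@timestamp": {
            "gte": "now-2m",
            "lte": "now"
          }
        }
      }
    ],
    "should": [
      { "match_phrase": { "beat.hostname": "elsearchdv1" } }
    ],
    "minimum_should_match": 1
  }
}
```

Aggregations only see documents matched by the query, which is why your `terms`/`max` agg currently runs over all 51 million hits: nothing in the query itself constrains the time range, and the `date_range` agg merely buckets results after the fact.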

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.