How to write the watcher condition & trigger rule

Hello! This is the domain & netflow picture. I want to send an e-mail when the netflow "Sum of bytes" is greater than 50,000,000 (50 MB).

I copied the request into my configuration.
I have been reading the official website for a long time...
"condition": {
"script": "if (ctx.payload.aggregations.minutes.buckets.size() == 0) return false; def latest = ctx.payload.aggregations.minutes.buckets[-1]; def node = latest.nodes.buckets[0]; return node && node.cpu && node.cpu.value >= 75;"
},
This example is similar to my configuration, but I think it matches a single bytes value, not the dashboard's "Sum of bytes", so I don't know how to compute "Sum of bytes". Is it ctx.payload.aggregations.bytes? ctx.payload.hits.total? Or
"condition" : {
"compare" : { "ctx.payload.hits.total" : { "gt" : 50,000.000 }}
},
?
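For example, if I add my own sum aggregation, maybe the condition could be a script like this? This is just a guess, and "total_bytes" is only a name I made up, it is not in my configuration yet:

"condition" : {
  "script" : "return ctx.payload.aggregations.total_bytes.value > 50000000"
}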
Another day has gone by. I have tried every way I can think of, but it still doesn't work. It's driving me crazy!! >_<
I can't fix the error...
I edited my configuration again this morning...
{
  "trigger" : {
    "schedule" : { "interval" : "5s" }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : [ "access-2016.02.24" ],
        "body" : {
          "query" : {
            "filtered": {
              "query": {
                "query_string": {
                  "query": "*",
                  "analyze_wildcard": true
                }
              },
              "filter": {
                "bool": {
                  "must": [
                    {
                      "range": {
                        "@timestamp": {
                          "gte": "now-5m",
                          "lte": "now"
                        }
                      }
                    }
                  ],
                  "must_not": []
                }
              }
            }
          },
          "size": 0,
          "aggs": {
            "2": {
              "date_histogram": {
                "field": "@timestamp",
                "interval": "5s",
                "time_zone": "Asia/Shanghai"
              },
              "aggs": {
                "3": {
                  "terms": {
                    "field": "type",
                    "size": 3,
                    "order": {
                      "1": "desc"
                    }
                  },
                  "aggs": {
                    "1": {
                      "sum": {
                        "field": "bytes"
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "throttle_period": "5s",
  "condition" : {
    "compare" : { "ctx.payload.aggregations.bytes.size()" : { "gt" : 50000000 } }
  },
  "actions" : {
    "send_email" : {
      "email" : {
        "to" : "kim@qq.com",
        "subject" : "netflow is too high",
        "body" : "netflow is too high"
      }
    }
  }
}

@spinscale please help me!! Foreign friends!

Thanks a lot!!

Hey there,

this was a lot of content :slight_smile:

let's try to take a step back and analyze this before we go on.

First: What do your documents look like? Does each document contain a field bytes that needs to be summed up to find the total? Can you provide an example document? Just to make sure we are talking about the same thing.
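For instance, is each document roughly something like this? The field names here are just my guess based on your watch:

{
  "@timestamp": "2016-02-24T10:15:00Z",
  "type": "netflow",
  "bytes": 1534
}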

If my above assumption is correct, then using the sum aggregation makes sense. However, your condition seems to be a bit off in using the size() method. What exactly do you want to sum up? You already have the sum per type. If you need the total sum, you could also just execute another aggregation. If you want to sum up manually, then size() will not help you, as it is a Groovy method that returns the size of a list/map.
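As a rough sketch (untested, and the name total_bytes is just something I made up), you could add a top-level sum aggregation next to your existing ones in the search body:

"aggs": {
  "total_bytes": {
    "sum": { "field": "bytes" }
  }
}

and then point the compare condition at its value:

"condition": {
  "compare": { "ctx.payload.aggregations.total_bytes.value": { "gt": 50000000 } }
}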

You should take a step back and try to debug things. First, make sure your search result is as expected. Then, when you start writing your watch, make use of the Execute Watch API to check whether your watch behaves as expected. This allows for much faster debugging cycles.
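For example, if your watch is stored under the id netflow_watch (just a placeholder id here), something along these lines runs it once and returns the full execution result, including the payload your condition sees (the exact endpoint depends on your Watcher version):

curl -XPOST 'http://localhost:9200/_watcher/watch/netflow_watch/_execute'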

Hope this helps.

--Alex

Thank you very much, Alex. I have now uploaded a picture for you.


And then I edited my configuration again...
I imitated the example here to write my configuration,
but it errors again...
Please help me take another look. Thanks!
:slight_smile:

Hey,

just posting screenshots makes it impossible to follow your problem on a step-by-step basis. Please provide examples that make it easy not only to follow, but also to recreate the example locally by copying it into a local installation of elasticsearch/watcher.

See the help document for further hints on how to provide examples, which can be easily read, understood and debugged by anyone in the forum.

Thanks a lot! Looking forward to help!

--Alex

Yeah, I have changed my approach and now alert via logstash -> mongodb -> zabbix. My English is so bad that I can't express what I mean very well, but thank you very much. :slight_smile: