I have created an Elastic metric threshold rule to monitor the disk usage of each node on the Elastic deployment itself, and I enabled alertOnNoData so that I would be alerted if the query returned no data or the query execution had an issue. But:
- If I add a broken filter to the query, the rule does not create a no-data alert, even though the graph shows no data (see the count check after this list).
- I removed the metrics index pattern from Observability > Metrics Explorer > Settings to cut off the data completely. Still no alert, although the graph shows no data. I am using the .monitoring-* index pattern in the settings.
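To rule out a data-side surprise, the broken filter can be checked directly against the monitoring indices with a count query. This is a minimal sketch; the index pattern comes from the Metrics settings and the term value from the rule's filterQuery below, while ES_URL and the credentials are placeholders for my environment:

curl -s -u "$ES_USER:$ES_PASSWORD" \
  -H 'Content-Type: application/json' \
  "$ES_URL/.monitoring-*/_count" -d '
{
  "query": {
    "bool": {
      "should": [
        { "term": { "elasticsearch.node.roles": { "value": "aaaaaaaaaaaaa" } } }
      ],
      "minimum_should_match": 1
    }
  }
}'

A count of 0 here would confirm that the rule's query really has nothing to evaluate, which matches the empty graph. The full rule definition: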
{
  "rule_type_id": "metrics.alert.threshold",
  "name": "test_alert",
  "tags": [],
  "consumer": "alerts",
  "schedule": {
    "interval": "1m"
  },
  "actions": [
    {
      "id": "action_uuid_1",
      "group": "metrics.threshold.nodata"
    }
  ],
  "params": {
    "criteria": [
      {
        "aggType": "custom",
        "comparator": ">=",
        "threshold": [80],
        "timeSize": 5,
        "timeUnit": "m",
        "customMetrics": [
          { "name": "A", "field": "elasticsearch.node.stats.fs.summary.total.bytes", "aggType": "avg" },
          { "name": "B", "field": "elasticsearch.node.stats.fs.summary.available.bytes", "aggType": "avg" },
          { "name": "C", "field": "elasticsearch.node.stats.fs.summary.total.bytes", "aggType": "avg" }
        ],
        "equation": "((A-B)/C)*100",
        "label": "Disk Usage",
        "warningComparator": ">=",
        "warningThreshold": [60]
      }
    ],
    "sourceId": "default",
    "alertOnNoData": true,
    "alertOnGroupDisappear": false,
    "groupBy": ["elasticsearch.node.name"],
    "filterQueryText": "elasticsearch.node.roles : \"aaaaaaaaaaaaa\"",
    "filterQuery": "{\"bool\":{\"should\":[{\"term\":{\"elasticsearch.node.roles\":{\"value\":\"aaaaaaaaaaaaa\"}}}],\"minimum_should_match\":1}}"
  }
}
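To double-check that alertOnNoData is really saved on the rule (and not just shown in the UI), the rule can also be fetched back through the Kibana alerting API. A minimal sketch, assuming a recent Kibana version; KIBANA_URL, the credentials, and RULE_ID are placeholders for my environment:

curl -s -u "$KIBANA_USER:$KIBANA_PASSWORD" \
  "$KIBANA_URL/api/alerting/rule/$RULE_ID"

The response shows "alertOnNoData": true under params and the action attached to the metrics.threshold.nodata group, i.e. the same as the JSON above.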
Just trying to understand why alertOnNoData is not working as I expected.