I've set up a Metric Threshold alert on some fairly infrequent metric data (which actually comes from the Kubernetes Elastic Agent integration). The filter on the metric data is:
host.name:host* AND event.dataset: kubernetes.event AND kubernetes.event.type: Warning
However, I'm getting "no data" alerts, which makes no sense given the condition is simply a document count >= 1. I also see that the checkbox for enabling alerts on no data is greyed out but checked. Why can't it be disabled?
Otherwise, this kind of alert is useless.
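To make the intent concrete, here is a minimal sketch (in plain Python, with hypothetical sample documents, not the actual rule executor) of what a document-count condition over a 10-minute window effectively evaluates. The field names come from the filter above; everything else is an assumption for illustration.

```python
from datetime import datetime, timedelta, timezone
from fnmatch import fnmatch

def matches_filter(doc):
    # Mirrors: host.name:host* AND event.dataset: kubernetes.event
    #          AND kubernetes.event.type: Warning
    return (
        fnmatch(doc.get("host.name", ""), "host*")
        and doc.get("event.dataset") == "kubernetes.event"
        and doc.get("kubernetes.event.type") == "Warning"
    )

def evaluate_document_count(docs, now, window=timedelta(minutes=10), threshold=1):
    """Count matching documents in the lookback window.

    For a document-count condition, an empty window is just a count of 0,
    so a separate "no data" state adds nothing on top of the threshold check.
    """
    count = sum(
        1
        for d in docs
        if now - d["@timestamp"] <= window and matches_filter(d)
    )
    return "alert" if count >= threshold else "ok"

now = datetime(2023, 1, 1, tzinfo=timezone.utc)
warning = {
    "@timestamp": now - timedelta(minutes=3),
    "host.name": "host-42",
    "event.dataset": "kubernetes.event",
    "kubernetes.event.type": "Warning",
}
print(evaluate_document_count([warning], now))  # alert
print(evaluate_document_count([], now))         # ok, never "no data"
```

As the sketch shows, a window with no documents simply yields a count of 0 and resolves to "ok", which is why a no-data alert on top of this condition is redundant.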
The natural language use case statement is: I want to be alerted if there is 1 or more Kubernetes warnings within the period of 10 min.
If so, I see two ways you could get rid of that enabled checkbox:
Temporarily add a condition that is not based on a "document count" metric. This should re-enable the checkbox so you can uncheck it. Afterwards you can remove the temporary condition again.
Re-create an identical rule, this time without the box checked from the beginning.
Hey @weltenwort. It's very possible that I was experimenting with other types of conditions in the same rule. I'll try what you have suggested and report back, thank you!
@weltenwort Temporarily changing the condition to enable the checkbox, uncheck and change the condition back to doc count worked correctly, thank you.
This seems like a bug to me, and I've actually already made a comment on that old PR asking about why this checkbox was disabled, since clearly it still "works" and is relevant in some use cases (albeit redundant). If we think it's a bug, I can open a new issue on the repo if needed.
Glad to hear it worked. I'm personally not sure why it was disabled for "document count" conditions, but I think it should at least be possible to toggle it off regardless. That part definitely feels like a UX bug.