Strange behavior of "no data" alerts

We have an alert that checks whether there is a controllermanager.master; it's configured to use the document count function (see the screenshot below).

Shortly after we created it, the alert started to report "no data" and fired. This was resolved when we added kubernetes.controllermanager.leader.is_master to the filter conditions. I then removed the additional condition to reproduce the problem, but it's gone: the alert no longer reports "no data", even though it's configured exactly as it was in the beginning.
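To make the configuration concrete, here is a rough sketch of the kind of count query our "document count" check boils down to, written with the Python Elasticsearch client. The index pattern, timestamp field, and 5-minute lookback are assumptions for illustration only, not the rule's actual internals; we used it just to sanity-check whether matching documents exist in the window.

```python
# Sketch of the count the alert effectively performs (assumed index pattern,
# timestamp field, and lookback window), using the Python Elasticsearch client.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # adjust to your cluster

resp = es.count(
    index="metrics-*",  # assumed index pattern
    query={
        "bool": {
            "filter": [
                # Only look at the most recent evaluation window.
                {"range": {"@timestamp": {"gte": "now-5m"}}},
                # The extra condition we added while troubleshooting:
                {"term": {"kubernetes.controllermanager.leader.is_master": True}},
            ]
        }
    },
)

if resp["count"] == 0:
    print("No matching documents in the window")
else:
    print(f"{resp['count']} matching documents in the window")
```

Running this by hand always returned matching documents, which is why the "no data" state surprised us.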

This is not the first time an alert reporting "no data" has been fixed simply by editing the filters and then restoring the original configuration. We run Elastic 8.4.

Have you spotted a similar problem?
Is there a way, in Kibana or in the logs, to debug such issues and get more information about the queries an alert runs?

Hey there @EDzhelyov, thanks for reaching out.

This is a known issue that we have identified and fixed with this PR: [Actionable Observability] Verify missing groups for Metric Threshold rule before scheduling no-data actions by simianhacker · Pull Request #144205 · elastic/kibana · GitHub.

This change is set to be released in our next version, 8.6.0, which is scheduled for release on December 13th.

Hope this helps!

Coen
