We have an alert that checks whether a controllermanager.master document is present. It is configured to use the document count function; see the screenshot below.
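For reference, below is a minimal sketch of how the document-count check could be reproduced manually outside the rule, so the result can be compared with what the alert reports. The Elasticsearch URL, index pattern, lookback window, and filter field are assumptions and would need to be adjusted to the rule's actual data view and conditions.

```python
import requests

# Minimal sketch: run a document-count query similar to what the rule performs,
# so the result can be compared with what the alert reports.
# Assumptions (adjust to the actual rule configuration):
#   - Elasticsearch is reachable at ES_URL
#   - the rule targets the metricbeat-* index pattern
#   - the lookback window is 5 minutes
#   - the filter condition matches on kubernetes.controllermanager.leader.is_master
ES_URL = "http://localhost:9200"
INDEX = "metricbeat-*"

query = {
    "query": {
        "bool": {
            "filter": [
                {"range": {"@timestamp": {"gte": "now-5m"}}},
                {"term": {"kubernetes.controllermanager.leader.is_master": True}},
            ]
        }
    }
}

resp = requests.post(f"{ES_URL}/{INDEX}/_count", json=query)
resp.raise_for_status()
print("matching documents:", resp.json()["count"])
```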
It started to report "no data" and triggered the alert right after we created it. This was resolved when we added kubernetes.controllermanager.leader.is_master to the filter conditions. I then removed that additional condition in order to reproduce the problem, but it's gone: the alert no longer reports "no data", even though it is configured exactly as it was in the beginning.
This is not the first time we've had an alert reporting "no data" that was fixed simply by editing the filters and then restoring the original configuration. We run Elastic 8.4.
Have you spotted a similar problem?
Is there a way, either in Kibana or in the logs, to debug such issues and get more information about the queries the alert runs?