Hi Elasticsearch Community,
I’ve created several Single Metric Machine Learning (ML) jobs, each configured to monitor the sum of bytes for specific IP addresses. Because of the large data volume, I’ve tuned each job individually, and the bucket span varies between 3 and 5 minutes depending on the data characteristics.
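For context, here is roughly how one of the jobs is defined. This is a simplified sketch: the job ID, field name, and time field are illustrative, and in practice I created the jobs through the Single Metric job wizard rather than the API.

```
# Simplified sketch of one job; job ID and field names are illustrative
PUT _ml/anomaly_detectors/bytes_sum_ip_10_0_0_1
{
  "description": "Sum of bytes for a single source IP",
  "analysis_config": {
    "bucket_span": "5m",
    "detectors": [
      {
        "function": "sum",
        "field_name": "bytes",
        "detector_description": "sum(bytes)"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```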
The ML jobs are running smoothly, and anomalies are being detected as expected—these are clearly visible in the Anomaly Detection charts. The issue arises with my alerting setup: I’ve configured anomaly detection alerts for each of these ML jobs, set to trigger email notifications when an anomaly is detected.
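I created the alert rules through the Kibana UI, but expressed as a Kibana Alerting API request the configuration looks roughly like the following. The parameter names and values are from memory and simplified, so they may not match my Kibana version exactly, and the connector ID is a placeholder.

```
POST /api/alerting/rule
{
  "name": "bytes sum anomaly - 10.0.0.1",
  "rule_type_id": "xpack.ml.anomaly_detection_alert",
  "consumer": "alerts",
  "schedule": { "interval": "1m" },
  "params": {
    "jobSelection": { "jobIds": ["bytes_sum_ip_10_0_0_1"] },
    "resultType": "bucket",
    "severity": 75,
    "includeInterim": false,
    "lookbackInterval": null,
    "topNBuckets": null
  },
  "actions": [
    {
      "group": "anomaly_score_match",
      "id": "my-email-connector-id",
      "params": {
        "subject": "ML anomaly detected",
        "message": "Anomaly score {{context.score}} at {{context.timestampIso8601}}"
      }
    }
  ]
}
```

As I understand it, the severity value is the anomaly score threshold the rule checks against each time it runs, and the email action should fire whenever that condition matches.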
However, I’m receiving significantly fewer email alerts than expected. While anomalies are displayed correctly in the Single Metric Viewer, not all of them translate into email notifications. Interestingly, the alert rule executions are marked as successful in Kibana, yet the email notifications only go out sporadically.
Could this be an issue with how the alerting condition is set up (e.g., the severity threshold, lookback interval, or anomaly score)? I would greatly appreciate guidance on what might be causing this discrepancy and how to ensure that every relevant anomaly triggers an email alert.