ML Anomaly Detection job gives very low score for absence of events

Hi,
I'd like to monitor the number of specific log events and raise an alert when the flow goes out of bounds. Initially I was going to use a moving average for this, but I found that Elastic does not support it and that ML anomaly detection is the recommended solution.

So I configured a Single Metric ML job to detect anomalies. Unfortunately, when events stopped flowing, the ML job did not raise an alert because it assigned a score of only 4 to this anomaly:


But when IT fixed the problem and the event flow resumed, there was an expected spike of buffered events, and for that case the score was 92, so the alert was raised.

Why was the score for the missing events so low? I need this ML model to raise alerts specifically for missing events or a dropping flow.

Hello @GlebCA ,

If you are specifically interested in the cases where the event count goes down, you should use the low_count function.
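For reference, a minimal sketch of such a job created through the Dev Tools console might look like the following (the job name `low_event_rate`, the `15m` bucket span, and the `@timestamp` field are assumptions; adjust them to your data):

```console
PUT _ml/anomaly_detectors/low_event_rate
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "low_count",
        "detector_description": "Unusually low event rate"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

Unlike `count`, which scores deviations in both directions, `low_count` is one-sided: only drops below the modeled rate are treated as anomalous, so a missing-events bucket should receive a much higher score.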

To trigger alerts when the event count drops to 0, it's easiest to use Kibana alerting rather than ML.
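As a sketch, a Kibana "Elasticsearch query" rule can periodically run a query like the one below and fire when the hit count falls below a threshold (the index pattern and the `now-5m` window are assumptions for illustration):

```json
{
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-5m" } } }
      ]
    }
  }
}
```

In the rule settings you would then alert when the document count is below 1, which catches a complete stop of the flow without any ML model.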

Thank you, Valeriy. I already have that in place, and it notified IT about the issue :slight_smile:

But I'd like to have an alert when the event flow deviates from "normal" without hardcoded values, because this "normal" depends on the day of the week and the number of customers signed up for the service.

New examples of potential anomalies were collected, and ML still assigns too low a severity when values are below the expected boundaries:


The actual value is 10 (TEN) times lower than expected, but the ML job gives this anomaly only Severity 21 :frowning:

Hello @GlebCA ,

To see what is going on, please post a screenshot of the Single Metric Viewer with the model plot activated for the time frame around 2024-10-05, together with the configuration of the anomaly detection job.

Found the likely reason: I need to use the low_count detector function instead of 'count'. Going to give it a try.
