Triggering Alerts for Machine Learning Jobs

Hi,
I have created a machine learning job that helps me identify anomalies in traffic for endpoints. Some endpoints are producing anomaly scores of 97%, 98%, and 99%. The job is started and running.

At the same time, I created a SIEM rule that points to the machine learning job I created, and set its alerting threshold to 75%.
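For reference, this is roughly how I'd express the rule setup via the Kibana Detection Engine API (a minimal sketch in Python; the Kibana URL, credentials, and job ID below are placeholders, not my real values):

```python
import requests

KIBANA_URL = "https://localhost:5601"  # placeholder

rule = {
    "name": "Endpoint traffic anomalies",
    "description": "Alert on anomalous endpoint traffic flagged by the ML job",
    "risk_score": 75,
    "severity": "high",
    "type": "machine_learning",                               # ML-based detection rule
    "machine_learning_job_id": "endpoint_traffic_anomalies",  # placeholder job ID
    "anomaly_threshold": 75,    # only anomalies scoring >= 75 create alerts
    "interval": "5m",           # how often the rule runs
    "from": "now-6m",           # lookback window per run
    "enabled": True,
}

resp = requests.post(
    f"{KIBANA_URL}/api/detection_engine/rules",
    json=rule,
    headers={"kbn-xsrf": "true"},   # header required by the Kibana API
    auth=("elastic", "changeme"),   # placeholder credentials
)
resp.raise_for_status()
print(resp.json()["id"])            # ID of the newly created rule
```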

However, I fail to get any alerts at all.
Surely the 97%, 98%, and 99% endpoints should have triggered a detection for me, right?
Or are there perhaps some obvious steps I am missing?
Any advice would be awesome!

Regards
/Mikael


Tbh, I've noticed a similar issue some time ago. Curious to hear Elastic's feedback or other users' experiences. What version are you on?

It's possible that this is related to the lookback time of your ML rule.

Because of the way that ML detection works, anomalies might not be found (or, more precisely, confirmed as anomalous) until some time after the inciting event(s) occurred. Due to this latency, the recommendation is to adjust your Detection Engine rule's lookback time to cover this "adjustment/finalization" period, ensuring that anomalies are captured as alerts. This is an instance of the late-arriving events issue.

The generally recommended formula is:

rule_lookback = 2 * bucket_span + query_delay

in order to capture anomalies once they've been finalized. The linked blog post above does an excellent job of explaining those parameters and their relation to rule execution, so I highly recommend giving it a read.
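To make that concrete: assuming, for example, a bucket_span of 15m and a query_delay of 60s (check your job and its datafeed configuration for the actual values), the lookback works out to 31 minutes:

```python
from datetime import timedelta

# Assumed example values; read the real ones from your ML job
# (bucket_span) and its datafeed (query_delay).
bucket_span = timedelta(minutes=15)
query_delay = timedelta(seconds=60)

# rule_lookback = 2 * bucket_span + query_delay
rule_lookback = 2 * bucket_span + query_delay
print(rule_lookback)  # 0:31:00
```

In the rule, that corresponds to setting "Additional look-back time" in the Kibana UI (or `from: now-31m` via the API) so each run re-queries the window in which anomalies may still be getting finalized.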

