Anomaly Jobs - General Strategies to reduce false positives

Hello everybody,

I'm struggling to baseline an anomaly detection job (filters are not really helping so far).

What are the general strategies or factors that drive the model to become, simply put, more "insensitive"?

I know there is a trade-off in detection capacity here; however, at the moment the job is not usable at all given the volume of alerts after baselining.

Hi, I generally ignore the first 1-2 weeks of results and count them as the model's building/learning period.

From two weeks on I start looking into the anomalies, working from the highest anomaly scores down over the following weeks while the model keeps learning.

This usually does the trick for me and gets the results to a good-enough state.


Please define what you mean by "false positive". In other words, if you can articulate why something isn't genuinely anomalous, then you can likely build a filter or a custom rule to eliminate it.
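For example, assuming this is an Elastic ML anomaly detection job, a detector-level custom rule can suppress results that match a known-benign pattern. A sketch of a detector config; the partition field, the `noisy_hosts` filter ID, and the threshold of 100 are illustrative placeholders, not values from this thread:

```json
{
  "detectors": [
    {
      "function": "count",
      "partition_field_name": "host.name",
      "custom_rules": [
        {
          "actions": ["skip_result"],
          "scope": {
            "host.name": { "filter_id": "noisy_hosts", "filter_type": "include" }
          },
          "conditions": [
            { "applies_to": "actual", "operator": "lt", "value": 100 }
          ]
        }
      ]
    }
  ]
}
```

Here `skip_result` suppresses the anomaly record while still letting the model learn from the data; add `skip_model_update` as well if the matching data should not influence the baseline either.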

Also, as @sholzhauer says, anomalies are scored with a severity for a reason: so that you can limit which anomalies are alerted upon. You might not pay any attention to an anomaly with a score < 25, for example.
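If your alerting is driven by a query over the ML results index (again assuming Elastic ML; the index pattern and the threshold of 25 are illustrative), that severity cutoff can be applied directly in the query:

```
GET .ml-anomalies-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "result_type": "record" } },
        { "range": { "record_score": { "gte": 25 } } }
      ]
    }
  }
}
```

Raising the `record_score` floor is usually the quickest way to make a noisy job usable while you work out longer-term filters or custom rules.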


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.