Elasticsearch Anomaly Detection - Influencing Anomaly Scoring


I am working with the Elasticsearch anomaly detection tools, and I am looking for a way to boost the scoring of an anomaly based on a specific threshold. For instance, let's say we have a `max` detector on a number that represents server processing time. To me, anything over 3500 ms always indicates a bad event. I want to be alerted whenever it goes over 3500 ms, but I still want the ML job to learn things like "it's normal to go over 3500 ms at 5 pm on Friday." Is there any way to train the model to treat values over this number as a bigger contribution to the anomaly score, or is the goal that over time the model will learn that we care about this 3500 ms mark? What happens if many of our records get close to 3500 ms, or if we currently have many cases where it is already over that threshold?

I have learned that there are custom rules I can put in place to skip records that fall within a certain range. This seems like an attempt at manually training the model. Is there no way to do the reverse?
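For reference, the custom rules I found look roughly like this (the field name `processing_time_ms` is just a placeholder for whatever my detector actually uses). This rule would suppress results when the actual value is below 3500:

```json
{
  "detectors": [
    {
      "function": "max",
      "field_name": "processing_time_ms",
      "custom_rules": [
        {
          "actions": ["skip_result"],
          "conditions": [
            {
              "applies_to": "actual",
              "operator": "lt",
              "value": 3500
            }
          ]
        }
      ]
    }
  ]
}
```

As far as I can tell, the available actions only suppress (`skip_result`, `skip_model_update`) — I don't see an action that raises a score, which is why I'm asking about the reverse.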

It sounds like the right solution for you here is a combination of an anomaly detection job and a traditional threshold-based alert rule running at the same time! The anomaly detection job keeps learning what "normal" looks like (including the Friday 5 pm pattern), while the alert rule fires deterministically on every event over 3500 ms.
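As a sketch of the alerting half: a threshold rule (e.g. an Elasticsearch query rule in Kibana) would periodically run a query like the one below. The index name `server-logs` and the field `processing_time_ms` are assumptions — substitute whatever your data actually uses:

```json
GET server-logs/_search
{
  "query": {
    "range": {
      "processing_time_ms": {
        "gt": 3500
      }
    }
  }
}
```

If this query returns hits in the rule's lookback window, the rule alerts — no machine learning involved, so it never "learns" to tolerate values over 3500 ms. Meanwhile the anomaly detection job alerts you separately when behavior is unusual relative to the learned baseline.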
