Hi, I was wondering: does applying custom rules on machine learning jobs alter the ML model? Or does the model stay the same and those anomalies are just not shown?
It can do either, or both:
Skip result = don't create an anomaly if the condition is met
Skip model update = don't alter the model if the condition is met
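For reference, custom rules are attached to a detector in the job configuration. As a hypothetical sketch (the threshold of 100 is made up for illustration), a rule that applies both actions when the actual value exceeds 100 could look like:

```json
"custom_rules": [
  {
    "actions": ["skip_result", "skip_model_update"],
    "conditions": [
      {
        "applies_to": "actual",
        "operator": "gt",
        "value": 100
      }
    ]
  }
]
```

With both actions set, values over 100 neither produce anomaly records nor influence the model's learned baseline.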
Thanks Rich. So if a metric has values from 1 to 100 and I choose to skip the model update and skip results for values over/under that range, will ML in a way "know" that my metric moves in that range and give me better anomaly results?
I'm not quite understanding your question. If your metric is always in the range of 1 to 100, then it doesn't make sense to skip over/under values that never occur. Or are you trying to protect yourself from badly reported data?
The metric is NORMALLY in the range of 1 to 100, but sometimes it goes under/over, which is an anomaly.
Does skipping the model update give me better results? And should we skip the model update in the case of an anomaly? Is that good practice?
Elastic ML already tries not to let anomalous values overly affect the model, so trying to manage this yourself with custom rules, while possible, is likely not worth the effort.
That said, if you ever get into a position where you feel the model has been "wrecked" by some unruly data, you can revert the model to a previous snapshot (from a time before the unruly input data arrived): Model snapshots | Machine Learning in the Elastic Stack [7.14] | Elastic
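For what it's worth, reverting is done through the model snapshots API. As a sketch (the job name and snapshot ID are placeholders you'd replace with your own, found via the get model snapshots API), the request looks roughly like:

```
POST _ml/anomaly_detectors/my_job/model_snapshots/<snapshot_id>/_revert
{
  "delete_intervening_results": true
}
```

Setting `delete_intervening_results` to `true` also removes the anomaly results generated between the snapshot's timestamp and now, so the reverted model can re-analyze that window cleanly.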