I've been running an ML job to detect unusually high event counts in a data stream.
Here's the specific configuration of the job:

Detector: high_count, partition_field_name: src_ip, bucket_span: 1d
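For reference, here's roughly how that detector looks in the job's analysis config (a sketch based on the settings above; everything not listed there is my assumption):

```json
{
  "analysis_config": {
    "bucket_span": "1d",
    "detectors": [
      {
        "function": "high_count",
        "partition_field_name": "src_ip"
      }
    ]
  }
}
```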
The issue is that the 'severity' of an anomaly sometimes changes after it has been detected by the job.
For example, today at 11 AM an alert showed up notifying me that anomalous activity had been detected with a severity of 91, influenced by src_ip: 10.10.10.10.
But when I checked again at 3 PM, the severity had dropped to 32.
I assumed the severity score could only go up over time as logs stack up (since the job detects anomalies based on log count).
But is it possible for the severity score to go down as well?
I'd appreciate any opinions or answers.
Thanks in advance