Hello everyone,
Please, can someone explain these two values (actual and typical)?
If I based my job on the max of a field (hdopDevice), why can't I see this value?
Also, when I select this period I see an anomaly (an orange point), but when I select another period the point changes to blue.
Your aggregation interval is much larger than your bucket span. So within the interval there will be a bucket that contains that value, but you would need to zoom in to see that in the viewer. These are two views of the same data. One is for one day, which uses both aggregation interval and bucket span of 5m. The true value of 3,900,000 is shown (the numbers on the left axis are truncated). The second one shows a week of data, so the aggregation interval grows to 30m, and the value shown drops to about 800,000.
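A toy sketch of the dilution effect described above, assuming (as a simplification) that the chart averages the per-bucket values whenever the plotting interval is coarser than the job's bucket span; the numbers are made up to mirror the example:

```python
# One spike of 3,900,000 in a single 5-minute bucket, surrounded by
# five quiet 5-minute buckets (together they make one 30m interval).
buckets_5m = [100_000] * 5 + [3_900_000]

# Zoomed in (chart interval == bucket span == 5m): the spike is shown as-is.
assert max(buckets_5m) == 3_900_000

# Zoomed out (30m chart interval): six 5m buckets collapse into one point,
# so the plotted value drops well below the true maximum.
plotted_30m = sum(buckets_5m) / len(buckets_5m)
print(plotted_30m)  # about 733,333 - far below the true 3,900,000
```

The exact plotted number depends on how the viewer aggregates, but the principle is the same: the spike is still in the underlying data, you just need to zoom in until the interval matches the bucket span to see it.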
Ah OK, that makes sense. Thanks @Badger for your answer.
I have two more questions:
First, in this graph:
value, actual, and typical: what are these values? Why can't I see the mean(hdopDevice), the aggregation I chose when creating this job?
Second, as this graph shows:
In the Overall swimlane the color is yellow, but when I view by an influencer I get 5 red squares. There seems to be a contradiction here: with 5 red squares, I would expect at least one red square in the Overall swimlane as well.
Please help.
The values shown in the expanded row of the anomalies table in the Single Metric Viewer are the typical and actual values observed for the mean(hdopDevice) aggregation used in your detector, over the 30-minute bucket span of your job. It's usually best to ensure the aggregation interval used for plotting the chart matches the bucket span of the job - which you can do by clicking the 'auto' zoom link on the top left of the chart.
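A minimal sketch of what "actual" and "typical" mean here, with hypothetical hdopDevice readings (this is not the real modelling, just the idea): "actual" is the detector function (mean) computed over the documents in one bucket, while "typical" is the value the model expected for that bucket based on past behaviour.

```python
# Hypothetical hdopDevice readings falling inside one 30-minute bucket:
bucket_values = [1.2, 1.4, 9.8, 1.3]

# 'actual' = mean(hdopDevice) over the bucket, as computed by the detector.
actual = sum(bucket_values) / len(bucket_values)

# 'typical' = the model's expected mean for this bucket (made-up number here;
# in reality it comes from the baseline the job has learned over time).
typical = 1.3

print(actual, typical)  # 3.425 1.3
```

The bigger the gap between actual and typical (relative to the learned variability), the higher the anomaly score for that bucket.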
As @Badger pointed out, the different components in the Anomaly Explorer view display scores from the various result types. As well as the link mentioned, this blog contains more details on how the anomaly scoring works.
Hope this helps,