Hello everyone,
I started an anomaly detection job with the following detector:
mean("datapoint.value") partition_field_name="datapoint.external_id"
The bucket_span is 15m, and new documents are indexed every 5 minutes.
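For reference, the job configuration is roughly equivalent to the following (this is a sketch, not the exact JSON; the `time_field` name is illustrative):

```json
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "mean",
        "field_name": "datapoint.value",
        "partition_field_name": "datapoint.external_id"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```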
The results have been fantastic. However, I'm running into an issue when I start a forecast: the forecasts for different partitions begin at different times.
For most partition_field values, the forecast begins right at the last point where data was loaded into the index. But for others it starts 1 to 4 hours later, and in one case it even starts 2.5 hours in the past.
Is this a configuration error, or is it intentional behavior, e.g. the forecast only emitting values once it is confident they could be accurate? Even so, that wouldn't explain why some forecasts start in the past.
I've already recreated the job with different configurations and visualized the forecast data in a dashboard to rule out an issue with the Single Metric Viewer. The same partition_field values keep showing the same time shifts.
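For completeness, I'm starting the forecasts with the standard forecast API; the duration here is just an example value:

```
POST _ml/anomaly_detectors/<job_id>/_forecast
{
  "duration": "1d"
}
```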
Thanks for your help!