Alert not being triggered for Uptime duration anomaly rule

Hey guys,
I'm trying to trigger alerts for an uptime duration anomaly using the webhook connector. Although I see the anomaly for the monitor, as shown in the screenshot below, no alert is triggered.
I have configured the alert rule and the webhook correctly. Could you please suggest how I could resolve this? Also, could you suggest an endpoint or website I could monitor that would produce an anomaly of critical severity?

Can you please confirm the following:

  1. The sample frequency of your uptime monitor (for example, in heartbeat.yml here it shows 60s):

  2. The sample frequency of your Anomaly Alert configuration:

  3. The bucket_span of the ML job that was created for you:

  4. The version of the Elastic Stack you are using
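For reference, items 3 and 4 can be checked quickly from Kibana Dev Tools with standard Elasticsearch APIs (the exact job IDs created by the Uptime app will vary, so the wildcard request below just lists everything):

```
# Stack version (item 4)
GET /

# bucket_span of every anomaly detection job (item 3)
GET _ml/anomaly_detectors/_all
```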

Also, note that I'm using the URL to monitor. I'm confident that if I collect several days of data on this URL (so that ML can learn its typical response time) and then switch the heartbeat.yml configuration to the query, it should result in a critical anomaly, since the response time will be more than 10 seconds when it was typically under 100 ms.
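As a rough sketch, the baseline monitor described above might look like this in heartbeat.yml (the monitor ID, name, and URL are placeholders, not the actual configuration from this thread):

```yaml
heartbeat.monitors:
  - type: http
    id: baseline-monitor            # placeholder id
    name: Baseline URL monitor      # placeholder name
    schedule: '@every 60s'          # the 60s sample frequency mentioned above
    urls: ["https://example.com/fast-endpoint"]  # placeholder; later swapped to the slow query
```

After several days of baseline data, changing `urls` to the slow endpoint should produce response times far outside the learned range, which is what would drive a critical-severity anomaly.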

Hi Rich,

Actually, I'm using the Elastic Synthetics integration (via Elastic Agent). While configuring the integration I set the monitor interval to 60 s, and while configuring the alert I set the sample frequency to 1 minute. The bucket span is 15m, and the deployment version is 7.16.2. Also, on some jobs I see this warning: "Datafeed has been retrieving no data for a while".
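As an aside, the "Datafeed has been retrieving no data for a while" warning can be investigated with the ML datafeed stats API from Kibana Dev Tools; the state and timing fields show whether the datafeed is running and when it last saw data:

```
# Stats for all datafeeds; check "state" and search timing counters
GET _ml/datafeeds/_stats
```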

Are you potentially testing the synthetics agent on a laptop that occasionally sleeps?

I think there may also be a bug here as to why the alert is not firing. Researching now. Will update when I know more.

Sure Rich.

It is true - there is a bug in the default alert logic when the alert is created from the Uptime UI. However, if you instead create the alert from the ML UI for the job that the Uptime app creates, the alert fires properly.

Alert example result: