Synthetics monitor status rule — context.monitorName and context.locationName empty in recovery notifications after long-duration alert

Elastic Stack version: 8.19.4

Rule type: Synthetics monitor status

We have a Synthetics monitor status rule configured to alert when a monitor is down 3 times within the last 3 checks. The rule covers all monitors in the space.

When an alert fires, the email notification arrives correctly with all context variables populated:

  • {{context.monitorName}} → xxx/yyy :white_check_mark:
  • {{context.locationName}} → xxxx

However, when the alert recovers after a long duration (in our case ~2 days), the recovery email arrives with all context variables empty:

  • {{context.monitorName}} → empty :cross_mark:
  • {{context.locationName}} → empty :cross_mark:

The email body is sent correctly (the action triggers), but all context.* variables are blank.
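As a stop-gap, we are considering a recovery message that falls back to rule-level variables when the context is empty. This is only a sketch: it assumes Kibana's Mustache rendering treats an empty string as falsy (so the inverted `{{^...}}` section fires), and that `{{rule.name}}` and `{{alert.id}}` remain populated on recovery:

```mustache
Monitor recovered: {{#context.monitorName}}{{context.monitorName}}{{/context.monitorName}}{{^context.monitorName}}{{rule.name}} (alert {{alert.id}}){{/context.monitorName}}
Location: {{#context.locationName}}{{context.locationName}}{{/context.locationName}}{{^context.locationName}}unknown{{/context.locationName}}
```

With a template like this the recovery email would at least identify the rule and alert even when context.monitorName comes through blank.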

  1. Are context.monitorName and context.locationName expected to be available in recovery actions for Synthetics monitor status rules?
  2. Is there a known issue with context variables being lost in long-duration alerts (48h+) on recovery?
  3. Would enabling "Receive distinct alerts for each location" help preserve context in recovery?
  4. Is there an alternative variable (e.g. alert.id) that reliably contains the monitor name in recovery?

Thank you in advance.

After further investigation, we have gathered more data points that make this issue harder to reproduce consistently:

  • A different alert that was active for 6 days (April 7 → April 13) recovered correctly with all context variables populated :white_check_mark:
  • The problematic recovery (April 11 → April 13, ~2 days) arrived with all context variables empty :cross_mark:
  • The issue has also occurred in a separate space with only a single monitor configured, so grouping/aggregation is definitively ruled out as the cause
  • The "Receive distinct alerts for each location" toggle was enabled in the rule where the issue occurred, so location grouping is also ruled out

This means the issue is intermittent and not consistently reproducible, which makes it difficult to diagnose. The failure does not appear to correlate with:

  • Alert duration (6 days worked fine, 2 days failed)
  • Number of monitors (happened with a single monitor)
  • Location grouping settings (happened with distinct alerts per location enabled)
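To capture more evidence the next time this happens, we plan to inspect the underlying alert-as-data documents from Dev Tools. A sketch, assuming Synthetics/Uptime alerts are written under the .alerts-observability.uptime.alerts-default alias (the exact alias may differ across deployments and versions):

```json
GET .alerts-observability.uptime.alerts-default/_search
{
  "size": 5,
  "sort": [{ "@timestamp": { "order": "desc" } }],
  "query": {
    "term": { "kibana.alert.status": "recovered" }
  }
}
```

If the recovered alert document still carries the monitor and location fields while the email arrived blank, that would point at the action's template rendering rather than the alert data itself.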

Has anyone seen this kind of intermittent behavior with Synthetics monitor status rules? Could this be related to a transient issue in the alerting task runner or a race condition during recovery evaluation?