Already deleted monitor is still reporting monitor status to Elasticsearch

I deleted a synthetics monitor but still receive new monitor results in the synthetics-* data stream. The monitor is also still visible in the old Uptime app, and it is still triggering alerts that are configured based on monitor tags. I cannot delete the monitor via the UI, because it is not visible in the Synthetics app, and I cannot delete the monitor via the Kibana API either.

I am running Elastic Stack version 8.18.1.

How can I get rid of the leftover monitor?
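
For reference, the Kibana API call I tried looked roughly like this (a sketch based on the documented Synthetics monitor management API; it is a Kibana endpoint, so it has to be sent to the Kibana host with a kbn-xsrf: true header rather than run as an Elasticsearch request):

DELETE /api/synthetics/monitors/<config-id-of-the-deleted-monitor>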

Have you tried deleting it directly from the index? If you are not familiar with the terminal, you can do this from Kibana via Dev Tools. First, run a GET to confirm the documents you want to remove are actually there.

Identify the exact monitor ID with a GET:


GET /synthetics-*/_search
{
  "query": {
    "term": {
      "monitor.id": "your-monitor-id"
    }
  }
}

Then delete all documents for that monitor:

POST /synthetics-*/_delete_by_query
{
  "query": {
    "term": {
      "monitor.id": "your-monitor-id"
    }
  }
}

Or try deleting through the Fleet API:

POST /api/fleet/agent_policies/delete_monitors
{
  "monitorIds": ["your-monitor-id"]
}

Deleting the data is not the problem. I need the monitor to stop reporting new data.
The endpoint /api/fleet/agent_policies/delete_monitors does not exist.

Sorry for asking again: did you remove just the index, or did you remove the policy? Removing only the index will not stop the monitor from executing again; you need to remove the policy and restart the agent so it stops sending data.
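
If an Elastic Agent and policy are involved, you can first check whether a synthetics package policy is still attached (a sketch; this is a Kibana Fleet API, so it has to be sent to the Kibana host rather than run as an Elasticsearch request):

GET /api/fleet/package_policies

In the response, look for entries whose package.name is synthetics and note their policy_id.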

I am using Elastic Synthetics monitoring, so there are no agents and no agent policies.
The monitor was removed via the Synthetics app in Kibana.

@niecore can you please share the monitor ID and the location the monitor is running from?

Location: europe-west3-a, monitor ID: da1c3a35-ca90-464d-a9ae-eb442ec644d5

@niecore thank you for providing the additional info, we are investigating it now.

@niecore as a follow-up question, are you able to add new monitors in that location, and are those running successfully?

@shahzad31 thanks for helping out here! Yes, I am able to configure new monitors, and they are running correctly.

@niecore can you please run this query and see if the monitor still exists as a saved object:

GET .kibana*/_search
{
  "size": 10000,
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "type": "synthetics-monitor"
          }
        },{
          "term": {
            "synthetics-monitor.config_id": "da1c3a35-ca90-464d-a9ae-eb442ec644d5"
          }
        }
      ]
    }
  }
}

I noted that in the past you have been hitting the limit on the number of monitors you can run in a location. If you need to increase the limit, you can raise a support ticket and the team should be able to help you with that.


The monitor does not seem to be available in the saved objects:

#! this request accesses system indices: [.kibana_8.4.1_001, .kibana_8.4.2_001, .kibana_8.5.0_001, .kibana_8.6.0_001, .kibana_8.8.0_001, .kibana_alerting_cases_8.8.0_001, .kibana_analytics_8.8.0_001, .kibana_blob_storage, .kibana_entities-definitions-1, .kibana_ingest_8.8.0_001, .kibana_security_session_1, .kibana_security_solution_8.8.0_001, .kibana_task_manager_8.4.1_001, .kibana_task_manager_8.4.2_001, .kibana_task_manager_8.5.0_001, .kibana_task_manager_8.6.0_001, .kibana_usage_counters_8.17.0_001], but in a future major version, direct access to system indices will be prevented by default
{
  "took": 112,
  "timed_out": false,
  "_shards": {
    "total": 48,
    "successful": 48,
    "skipped": 47,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 0,
      "relation": "eq"
    },
    "max_score": null,
    "hits": []
  }
}

It also stopped reporting at May 20, 2025 @ 23:48:54.147.
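
For completeness, the latest document for a monitor can be checked with a query like this (a sketch; it filters on monitor.id as in the earlier example, and config_id should work as well if the IDs differ):

GET synthetics-*/_search
{
  "size": 1,
  "sort": [
    { "@timestamp": "desc" }
  ],
  "query": {
    "term": {
      "monitor.id": "da1c3a35-ca90-464d-a9ae-eb442ec644d5"
    }
  }
}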

Thanks for your help!