Watcher sending alert when condition not met

I want to send an alert if a cron job has not run in the last 10 minutes. I set the following up in watcher:

"query": {
  "bool": {
    "must": [
      {
        "terms": {
          "host.hostname": [
            "the hostname"
          ]
        }
      },
      {
        "range": {
          "@timestamp": {
            "gte": "now-10m"
          }
        }
      }
    ],
    "must_not": {
      "match": {
        "message": "cron log entry that job ran"
      }
    },
    "filter": {
      "term": {
        "log.file.path": "/var/log/cron"
      }
    }
  }
}

Once or twice a day I get an alert even though the entry is in the cron log. Any suggestions on how to improve the alert?

Thank you

So, the question is: why is the alert firing while the entry still shows up in the logs?

You can check the watcher history indices for the time when the watch was triggered and see what the search returned.
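For example, a search along these lines shows the payload each execution actually saw (this is a sketch assuming the default watcher history index naming; `my_cron_watch` is a placeholder for your watch ID):

```json
GET .watcher-history-*/_search
{
  "query": {
    "term": {
      "watch_id": "my_cron_watch"
    }
  },
  "sort": [
    { "trigger_event.triggered_time": "desc" }
  ],
  "_source": [
    "trigger_event.triggered_time",
    "result.input.payload.hits.total"
  ]
}
```

Comparing the hit counts in the recorded payloads against what you see in the cron log afterwards tells you whether the search genuinely found nothing at trigger time.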

Is it possible that ingestion of your data took longer than 10 minutes?

Thanks for responding. The watcher runs every 5 minutes and checks the cron log for the prior 10 minutes. The cron job runs every 5 minutes. I had an alert sent at 2:07 today; the cron job ran at 2:00 pm and at 2:05 pm. I checked the Logs interface in Kibana, and the entry has a timestamp of 2020-06-30T18:05:09.001Z.

It does not appear to be an ingest issue. How can I confirm that the watch checks all log records for the last ten minutes?

If the timestamp is the time when the event happened on the original system (like when a log entry was generated), you don't have any guarantee that the event was ingested within any particular timeframe. You could verify this by recording an ingestion timestamp via an ingest pipeline.
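A minimal sketch of such a pipeline, using a `set` processor with the built-in `_ingest.timestamp` metadata field (the pipeline name `add-ingest-time` and the target field `event.ingested` are names chosen here for illustration):

```json
PUT _ingest/pipeline/add-ingest-time
{
  "description": "Record the time each document was ingested",
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
```

You can then apply it to incoming documents, for example by setting it as the index's `index.default_pipeline` setting, so every cron log entry carries both the event time and the ingest time.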

I am still not sure you have ruled out my scenario above, but happy to be corrected!

How can I rule out that it is a delay in the logs being ingested?
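If you record an ingestion timestamp (for example in a field like `event.ingested`, populated by an ingest pipeline), you can compare it against `@timestamp` directly. A sketch using `script_fields` with a Painless script; the index pattern and field names here are assumptions, not taken from your setup:

```json
GET filebeat-*/_search
{
  "query": {
    "term": { "log.file.path": "/var/log/cron" }
  },
  "script_fields": {
    "ingest_lag_ms": {
      "script": {
        "lang": "painless",
        "source": "doc['event.ingested'].value.millis - doc['@timestamp'].value.millis"
      }
    }
  }
}
```

If some documents show a lag of several minutes, that would explain the occasional false alert; a common workaround is to shift the watch's window slightly into the past (e.g. `"gte": "now-15m", "lte": "now-5m"`) so ingestion has time to catch up.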

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.