Kibana Alerting - Randomly Triggering at the Same Time Every Day

Hi there,

We have Kibana set up and running against a 6.2 Elasticsearch cluster that we use to store and monitor all of our production logs, with Logstash parsing the logs and posting them to the ES cluster. We recently set up a few alerts with the Alerting feature, and every morning at 9:28 am GMT+1 the alert is triggered unexpectedly, even though manually running the query it uses returns no matching results.

The alert uses an extraction query that searches all indices matching the wildcard phplogs-*. The extraction query is as follows:

{
    "size": 500,
    "query": {
        "bool": {
            "must": [
                {
                    "match_all": {
                        "boost": 1
                    }
                },
                {
                    "match_phrase": {
                        "priority": {
                            "query": "CRITICAL",
                            "slop": 0,
                            "boost": 1
                        }
                    }
                },
                {
                    "range": {
                        "@timestamp": {
                            "from": "now-60s",
                            "to": "now",
                            "include_lower": true,
                            "include_upper": true,
                            "format": "epoch_millis",
                            "boost": 1
                        }
                    }
                }
            ],
            "adjust_pure_negative": true,
            "boost": 1
        }
    },
    "version": true,
    "_source": {
        "includes": [],
        "excludes": []
    },
    "stored_fields": "*",
    "docvalue_fields": [
        "@timestamp",
        "git_date"
    ],
    "script_fields": {},
    "sort": [
        {
            "@timestamp": {
                "order": "desc",
                "unmapped_type": "boolean"
            }
        }
    ],
    "aggregations": {
        "2": {
            "date_histogram": {
                "field": "@timestamp",
                "time_zone": "Europe/London",
                "interval": "1s",
                "offset": 0,
                "order": {
                    "_key": "asc"
                },
                "keyed": false,
                "min_doc_count": 1
            }
        }
    },
    "highlight": {
        "pre_tags": [
            "@kibana-highlighted-field@"
        ],
        "post_tags": [
            "@/kibana-highlighted-field@"
        ],
        "fragment_size": 2147483647,
        "fields": {
            "*": {}
        }
    }
}

All it really does is search over the last 60 seconds of logs and check whether any of them have the priority CRITICAL. The alert is then fired by the following trigger condition:

ctx.results[0].hits.total > 0
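
For context, the trigger only looks at the total hit count of the search response, so on a quiet minute we would expect a response along these lines (trimmed by hand; the _shards and aggregations sections are omitted):

{
    "took": 2,
    "timed_out": false,
    "hits": {
        "total": 0,
        "max_score": null,
        "hits": []
    }
}

In that case ctx.results[0].hits.total should be 0 and the trigger should stay quiet.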

We have had occurrences where the alert worked as expected and caught valid logs with the priority CRITICAL. However, there are also times when it triggers unexpectedly even though none of the logs match the search criteria. Just wondering if anyone else has experienced similar issues or could shed some light on the problem.

Thanks!

Hey,

are you using the official alerting from Elasticsearch or something else? I don't think that trigger condition syntax works with the official alerting, hence the question.
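
For comparison, with the official X-Pack Watcher you would write the condition against ctx.payload rather than ctx.results, roughly like this (just a sketch from memory, not taken from your setup):

{
    "condition": {
        "compare": {
            "ctx.payload.hits.total": {
                "gt": 0
            }
        }
    }
}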

--Alex

Hi,

It's just occurred to me that we are actually using the Open Distro version of Elasticsearch, as provided by AWS's Elasticsearch Service. Closing this and moving the conversation over there.

Thanks,
Brandon
