Actions firing, but nothing shows up under alerts

I've got a rule that triggers based on an arbitrary query and fires a Slack message action. This works great, but no alerts show up under the rule in Stack Management, and I'm also not seeing anything under Observability > Alerts. My understanding is that I should see alert history in both of these places. Am I wrong, or do I perhaps have something configured incorrectly? I'm currently on Elastic Cloud 7.16.3.

Hi @jeremyross

Welcome to the community :slight_smile:
Could you elaborate a little bit?
What rule type are you using?
What are you seeing when you look at the rule in Stack Management?

Thanks. The rule type is Elasticsearch query. The rule shows as active, with zero alerts listed. Screenshot attached.

Thanks Jeremy!

That's surprising :thinking:
Could you share your configuration perhaps?

It could also be worth querying the Event Log to see what events appear there for the rule in question.
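Something along these lines (run from Dev Tools so the body is actually sent with the GET, and swap in your rule's id — this is a rough sketch, the exact fields can vary a little by version) should return the rule's execution and action events:

GET .kibana-event-log*/_search
{
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ],
  "query": {
    "bool": {
      "filter": [
        { "term": { "event.provider": "alerting" } },
        {
          "nested": {
            "path": "kibana.saved_objects",
            "query": {
              "bool": {
                "filter": [
                  { "term": { "kibana.saved_objects.id": "<your rule id>" } },
                  { "term": { "kibana.saved_objects.type": "alert" } }
                ]
              }
            }
          }
        }
      ]
    }
  }
}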

What's the best way to share the config?
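Easiest is probably to pull the rule over the Kibana HTTP API and paste the JSON here — something like this (a sketch; substitute your Kibana URL, credentials, and the rule's id; on 7.16 the legacy alerts endpoint still works alongside the newer /api/alerting/rule/<rule id>):

curl -u <user>:<password> "https://<your kibana url>/api/alerts/alert/<rule id>"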

Here is the result of the alert query. Not much here:

{
  "took": 121,
  "timed_out": false,
  "_shards": {
    "total": 8,
    "successful": 8,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 2,
      "relation": "eq"
    },
    "max_score": 1.0,
    "hits": [
      {
        "_index": ".kibana-event-log-7.13.3-000006",
        "_type": "_doc",
        "_id": "OBMEw30BXSKYfxtAxcAg",
        "_score": 1.0,
        "_source": {
          "@timestamp": "2021-12-16T11:33:29.499Z",
          "event": {
            "provider": "eventLog",
            "action": "stopping"
          },
          "message": "eventLog stopping",
          "ecs": {
            "version": "1.8.0"
          },
          "kibana": {
            "server_uuid": "191380ba-d7f0-4abd-98a8-d09efe06f6ce"
          }
        }
      },
      {
        "_index": ".kibana-event-log-7.13.3-000006",
        "_type": "_doc",
        "_id": "OhMFw30BXSKYfxtAIsBj",
        "_score": 1.0,
        "_source": {
          "@timestamp": "2021-12-16T11:33:52.449Z",
          "event": {
            "provider": "eventLog",
            "action": "starting"
          },
          "message": "eventLog starting",
          "ecs": {
            "version": "1.8.0"
          },
          "kibana": {
            "server_uuid": "191380ba-d7f0-4abd-98a8-d09efe06f6ce"
          }
        }
      }
    ]
  }
}

OK, here's the config:

{
    "id": "5ec9f730-7894-11ec-90e6-ed536f5574a3",
    "notifyWhen": "onThrottleInterval",
    "consumer": "alerts",
    "tags": [],
    "name": "JGR error threshold",
    "throttle": "1h",
    "enabled": true,
    "alertTypeId": ".es-query",
    "apiKeyOwner": "admin",
    "createdBy": "1943489943",
    "updatedBy": "admin",
    "muteAll": false,
    "mutedInstanceIds": [],
    "schedule": {
        "interval": "5m"
    },
    "actions": [
        {
            "group": "query matched",
            "params": {
                "message": "*'{{alertName}}'* is active:\n\n- Error count: {{context.value}}\n- Conditions Met: {{context.conditions}} over {{params.timeWindowSize}}{{params.timeWindowUnit}}\n- Timestamp: {{context.date}}"
            },
            "actionTypeId": ".slack",
            "id": "a8b37da0-788d-11ec-90e6-ed536f5574a3"
        }
    ],
    "params": {
        "esQuery": "{\n    \"query\": {\n        \"bool\": {\n            \"must\": [\n                { \"term\": { \"name\": \"CamelExchangesFailed\" }},\n                { \"range\": { \"count\": {\"gt\": 0 } }}\n            ]\n        }\n    }\n}\n",
        "size": 100,
        "timeWindowSize": 60,
        "timeWindowUnit": "m",
        "threshold": [
            0
        ],
        "thresholdComparator": ">",
        "index": [
            "jgr2-integration-metrics*"
        ],
        "timeField": "@timestamp"
    },
    "updatedAt": "2022-02-21T15:44:57.125Z",
    "createdAt": "2022-01-18T19:25:18.336Z",
    "scheduledTaskId": "5f6e7490-7894-11ec-90e6-ed536f5574a3",
    "executionStatus": {
        "status": "active",
        "lastExecutionDate": "2022-02-21T16:35:13.934Z",
        "lastDuration": 169
    }
}

This seems weirdly empty :thinking:
What index have you run this query against?

Did you use the index pattern: /.kibana-event-log*/_search?

It looks like you might have only queried the 7.13.3 index.

OK, I don't think my API tool was sending the body since it was a GET request. Here's the request and response:

Request

GET /.kibana-event-log*/_search HTTP/1.1
Authorization: Basic 
Content-Type: application/json
Host: {}.us-east-1.aws.found.io:9243
Connection: close
User-Agent: Paw/3.3.5 (Macintosh; OS X/12.1.0) GCDHTTPRequest
Content-Length: 987

{
  "sort": [
    {
      "@timestamp": {
        "order": "desc"
      }
    }
  ],
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "event.provider": {
              "value": "alerting"
            }
          }
        },
        // optionally filter by specific action event
        // filter by specific rule id
        {
          "nested": {
            "path": "kibana.saved_objects",
            "query": {
              "bool": {
                "filter": [
                  {
                    "term": {
                      "kibana.saved_objects.id": {
                        "value": "5ec9f730-7894-11ec-90e6-ed536f5574a3"
                      }
                    }
                  },
                  {
                    "term": {
                      "kibana.saved_objects.type": "alert"
                    }
                  }
                ]
              }
            }
          }
        }
      ]
    }
  }
}

Response

{
  "took": 3,
  "timed_out": false,
  "_shards": {
    "total": 8,
    "successful": 8,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 0,
      "relation": "eq"
    },
    "max_score": null,
    "hits": []
  }
}

That can't be right :thinking:
Getting no hits suggests your Event Log indices are empty... which would explain why you aren't seeing alerts, but shouldn't be happening.
The id and type look right... so you should be getting back docs for every execution of the rule and every action fired.

Have you changed the ILM policies on these indices by any chance?
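If you want to check, something like this should show the event log policy and where each index sits in it (I believe the policy is called kibana-event-log-policy, but it's worth confirming the name on your cluster):

GET _ilm/policy/kibana-event-log-policy
GET .kibana-event-log*/_ilm/explain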

No, I've just done basic configuration of the rules and actions. I am getting Slack messages, so the actions are working. Is there anything else I can check?

GET /.kibana-event-log*/_count
results in

{
  "count": 2,
  "_shards": {
    "total": 8,
    "successful": 8,
    "skipped": 0,
    "failed": 0
  }
}

Could you look through your Kibana server log and see if any errors or warnings appear in there?
It sounds like your system is working but isn't writing event log entries correctly, which would explain why you can't see alerts, but is very unusual (I've never seen this before).
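It might also be worth checking which event log indices and aliases actually exist — on 7.16.3 I'd expect something like a .kibana-event-log-7.16.3-* index behind a .kibana-event-log-7.16.3 alias, yet the only docs in your earlier search were in a 7.13.3 index:

GET _cat/indices/.kibana-event-log*?v
GET _cat/aliases/.kibana-event-log*?v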

There are no errors or warnings, only INFO level. I skimmed through them and didn't see anything relevant.
