Detection rule KQL query will not trigger but the query matches

Hi,
I have a weird issue when using KQL queries in detection rules.
I use a simple query to match a field, with a triggering action, but the rule is not triggered even though its query matches within the specified interval.
I can see the query matching in the "preview results" pane, but rule monitoring shows the rule executed successfully, with no results in Detections.

Another weird point: when I delete the rule and recreate it, it matches, but it seems to match only once.

I'm using Elastic Cloud with the default configuration. I see no errors in rule monitoring, only success messages.


Hi @leon3, welcome to our community!

We're glad you are trying out the Elastic Security solution, and hopefully we can get to the bottom of your rule questions and get your detections running smoothly.

Please allow me to start with a very basic question: how are you determining that the rule is not triggering?

If you are able to share your rule with us (be sure to mask out any confidential information), that could help us spot anything that might be contributing to your issue.

Also, please let us know a little bit about your environment:

  1. What version of the Elastic Stack are you running in Elastic Cloud?
  2. How are you sending your data into Elastic Cloud?
  3. Is your data represented in an ECS-compliant format?

Meanwhile, here are some monitoring and troubleshooting tips for when you are missing alerts.

One thing to keep in mind is that the "Quick query preview" function during rule creation excludes the effects of rule exceptions and timestamp overrides. So if your rule has either of these applied, that could lead to a difference in behavior between the preview query and the rule execution.
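For illustration, here's a minimal Python sketch (hypothetical Kibana URL, credentials, and rule_id, not taken from this thread) of fetching a rule through the detection engine API to check whether either of those two settings is in play:

```python
# Sketch only: pull a rule back via the Kibana detection engine API and check
# the two settings that the quick query preview ignores. The URL, credentials,
# and rule_id are hypothetical placeholders.
import requests

KIBANA_URL = "https://my-deployment.kb.us-east-1.aws.found.io"  # hypothetical
AUTH = ("elastic", "changeme")  # replace with your own credentials

resp = requests.get(
    f"{KIBANA_URL}/api/detection_engine/rules",
    params={"rule_id": "my-rule-id"},  # hypothetical rule_id
    auth=AUTH,
    headers={"kbn-xsrf": "true"},
)
rule = resp.json()

# If either of these is set, preview results and real executions can diverge.
print("timestamp_override:", rule.get("timestamp_override"))
print("exceptions_list:", rule.get("exceptions_list"))
```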

Another idea: You mention a "triggering action" for the rule. Be sure to check your rule's Actions frequency settings. If you select an hourly, daily, or weekly frequency, you will only get the notification once per hour/day/week, even if the rule generates alerts on every execution.
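Similarly, here's a sketch (same hypothetical placeholders as above) that reads the rule's throttle value, which is where that frequency setting lives as far as I'm aware:

```python
# Sketch only: read the rule's notification throttle. To my knowledge the
# Actions frequency picker maps to a throttle value such as "rule" (notify on
# every execution that generates alerts), "1h", "1d", or "7d".
import requests

KIBANA_URL = "https://my-deployment.kb.us-east-1.aws.found.io"  # hypothetical
AUTH = ("elastic", "changeme")  # replace with your own credentials

resp = requests.get(
    f"{KIBANA_URL}/api/detection_engine/rules",
    params={"rule_id": "my-rule-id"},  # hypothetical rule_id
    auth=AUTH,
    headers={"kbn-xsrf": "true"},
)
print("throttle:", resp.json().get("throttle"))
```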

Hi @Mike_Paquette @leon3

I am facing a similar issue. Can you please help?

Here is the link to my issue:

Hi @aditi_salunke I think your issue is different (you are receiving failure messages; @leon3 is not) and appears, as you suspect, to be caused by a mapping conflict in your data with regard to the host field.
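To make that failure mode concrete, here's a small sketch (hypothetical index names and deployment URL) of how such a host conflict commonly arises from dynamic mappings:

```python
# Sketch only: dynamic mapping types "host" as a string in the first index and
# as an object in the second, so any index pattern spanning both (e.g. logs-*)
# reports a mapping conflict on the host field.
import requests

ES_URL = "https://my-deployment.es.us-east-1.aws.found.io:9243"  # hypothetical
AUTH = ("elastic", "changeme")  # replace with your own credentials

# "host" becomes a text/keyword field here ...
requests.post(f"{ES_URL}/logs-app-a/_doc", json={"host": "web-01"}, auth=AUTH)

# ... and an object field here, which conflicts across logs-*.
requests.post(f"{ES_URL}/logs-app-b/_doc", json={"host": {"name": "web-01"}}, auth=AUTH)
```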

I want to make sure you are aware that the Elastic SIEM/Security app, including its detection rules, signals, and detection alerts, requires your data to be indexed in an ECS-compliant format. ECS is an open source, community-developed schema that specifies field names and Elasticsearch data types for each field, and provides descriptions and example usage.

The easiest way to get your data into an ECS-compliant format is to use an Elastic-supplied Beats module (e.g., a Filebeat module) or an Elastic Agent integration, which will ingest and index your data in an ECS-compliant format. Elastic provides a growing list of these integrations that you can find on our Integrations page.

Where is your data coming from? Perhaps there's an integration that can help?

General guidelines for creating ECS-compliant data (a mapping sketch follows the list):

  1. Each indexed document (e.g., your log, event, etc.) MUST have the @timestamp field.
  2. Your index mapping template must specify the Elasticsearch field data type for each field as defined by ECS. For example, your @timestamp field must use the date field data type. This ensures that there will not be any mapping conflicts in your indices.
  3. The original fields from your log/event SHOULD be copied/renamed/converted to the corresponding ECS-defined field name and data type.
  4. Additional ECS fields, such as the ECS Categorization fields, SHOULD be populated for each log/event to allow proper inclusion of your data in dashboards and detection rules.
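To make guideline 2 concrete, here's a minimal sketch (hypothetical template name and index pattern, using the Elasticsearch 7.8+ composable index template API) of a template that enforces ECS data types up front:

```python
# Sketch only: an index template that pins @timestamp and a few ECS fields to
# the data types ECS defines, so every matching index maps them consistently.
import requests

ES_URL = "https://my-deployment.es.us-east-1.aws.found.io:9243"  # hypothetical
AUTH = ("elastic", "changeme")  # replace with your own credentials

template = {
    "index_patterns": ["my-logs-*"],  # hypothetical pattern
    "template": {
        "mappings": {
            "properties": {
                "@timestamp": {"type": "date"},  # guideline 1: required
                "host": {"properties": {"name": {"type": "keyword"}}},
                "event": {  # guideline 4: ECS categorization fields
                    "properties": {
                        "category": {"type": "keyword"},
                        "type": {"type": "keyword"},
                    }
                },
                "message": {"type": "text"},
            }
        }
    },
}

resp = requests.put(f"{ES_URL}/_index_template/my-ecs-logs", json=template, auth=AUTH)
print(resp.status_code, resp.json())
```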

Here's a simple graphic that I created to help get this point across.

Please let us know if this helps.

