SIEM Signals not triggering

SIEM signals are not triggering even after the events are generated, but the same KQL query used in the signal works fine in the Discover tab. Please help me resolve this issue.

In the upcoming release we have tightened up our error reporting and the bubbling up of error messages, which should help with a lot of these issues.

In the meantime, could you post your source index mapping, your exported rule(s), and a sample of the data that you expect to match but doesn't? I can run it through our latest build and should be able to report back what the issue is.

Also let us know what version of the stack you're using if you could.



event.type: "success authentication"

A 5-minute interval is set for the signal to run, with a 1-minute additional look-back.

Index name: xxx-xxx-xxx-*
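
One quick sanity check, independent of the rule scheduler, is to run the equivalent query directly from Dev Tools against the same index pattern (a sketch using only the index pattern and field value quoted above; a quoted KQL string behaves like a phrase match):

```
GET xxx-xxx-xxx-*/_search
{
  "query": {
    "match_phrase": {
      "event.type": "success authentication"
    }
  }
}
```

If this returns hits but the rule still produces no signals, the problem is more likely the rule's time window or the timestamp mapping than the query itself.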

Would it be possible to give us the mapping from dev tools like so:

GET auditbeat-7.8.0/_mapping

And then a sample of full records like so?

GET auditbeat-7.8.0/_search

I'm using auditbeat as an example, but you would want to swap that out for the index you are using. Dev Tools is found under Management, if you have permissions for it:


"hits": [
  "_index": "xxx-xxx-date",
  "_type": "_doc",
  "_id": "xxx",
  "_score": 1.0,
  "_source": {
    "sequence": {
      "number": "25001"
    "event": {
      "type": "message rejected indication",
      "description": "xxxxxxx"
      "type": "snmptrap",
      "session": {
        "id": "xxxx" },
      "message": "xxxxxxxxx" } },


Index name: {
  "mappings": {
    "_meta": {},
    "properties": {
      "type": "date"
      "@version": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
      "Snmpv2::enterprise": {
        "properties": {
          "18494": {
            "properties": ............
            ........ etc.

Would it be possible to post the two full JSON structures via copy and paste? The full messages, copied verbatim, will help us determine how many issues stem from the custom indexes all at once, reducing back-and-forth and avoiding mistakes from re-typing.

I am assuming this is a custom mapping and not from one of the Elastic Agents/beats such as auditbeat.

However, from the parts you have posted, you do appear to have a timestamp field instead of @timestamp:

    type: date

I would validate your mapping against the ECS standards from here:

Sorry I am not able to share exact copy of mapping.

Regarding the timestamp comment: in the mapping it is in the same format as the one you mentioned.

type: date

Hi @jancodenew, welcome to our community!

Stepping back a bit, I want to ensure that you are aware of Elastic Common Schema (ECS).

The Elastic SIEM/Security app, including its detection rules, signals, and detection alerts, requires your data to be indexed in an ECS-compliant format. ECS is an open source, community-developed schema that specifies field names and Elasticsearch data types for each field, and provides descriptions and example usage.

The easiest way to get your data into an ECS-compliant format is to use an Elastic-supplied beat module (e.g., Filebeat) or Elastic Agent integration, which will, by default, ingest and index your data in an ECS-compliant format. Elastic provides a growing list of these integrations, which you can find on our Integrations page.

If you're using a custom data ingestion method (beat, Logstash, Ingest node pipeline), or one provided by a third-party, then you may need to convert your data so that it is in an ECS-compliant format before you can use the SIEM/security app. This can be done by creating your own beat/module, or your own Logstash configuration for each data source, which will convert your data to ECS during the ingestion process.
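
For example, if you ingest through an ingest node pipeline, the rename and date processors can map custom field names onto their ECS equivalents (a hedged sketch; the pipeline name and source field names here are hypothetical, not taken from this thread):

```
PUT _ingest/pipeline/ecs-convert-example
{
  "description": "Hypothetical example: map custom fields to ECS",
  "processors": [
    { "rename": { "field": "src_ip",   "target_field": "source.ip", "ignore_missing": true } },
    { "rename": { "field": "msg",      "target_field": "message",   "ignore_missing": true } },
    { "date":   { "field": "log_time", "formats": ["ISO8601"],      "ignore_failure": true } }
  ]
}
```

The date processor writes its output to @timestamp by default, which is the field the detection engine depends on.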

General guidelines for creating ECS-compliant data:

  1. Each indexed document (e.g., your log, event, etc.) MUST have the @timestamp field.
  2. Your index mapping template must specify the Elasticsearch field data type for each field as defined by ECS. For example, your @timestamp field must be of the date field data type, etc. This ensures that there will not be any mapping conflicts in your indices, which could stop signals from being created.
  3. The original fields from your log/event SHOULD be copied/renamed/converted to the corresponding ECS-defined field name and data type.
  4. Additional ECS fields, such as the ECS Categorization fields SHOULD be populated for each log/event, to allow proper inclusion of your data into dashboards and detection rules.
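
Putting guidelines 1 and 2 together, a minimal ECS-compliant mapping for a custom index might look like the following (a sketch only; the index name and the fields other than @timestamp are illustrative, not a complete ECS mapping):

```
PUT my-custom-index
{
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "event": {
        "properties": {
          "category": { "type": "keyword" },
          "type":     { "type": "keyword" }
        }
      },
      "message": { "type": "text" }
    }
  }
}
```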

A list of the specific ECS fields used by the SIEM/Security app is provided in this reference.

Sorry for the information dump, but we've found that non-ECS-compliant data is a common root cause for users who experience problems getting their SIEM/Security app rules/signals to work.

Please let us know if this is helpful.


ECS requires, at the very least, a @timestamp field. Beyond that value having to be present, you can use a field other than @timestamp by setting it in the override fields under advanced settings in step 2. You should still have a @timestamp field indexed and mapped, since many of the Security solution's rules and features rely on that @timestamp value being there.

To let you know: when 7.10.0+ rolls out, we will have improved error reporting and handling for your use case, where you cannot divulge the sensitive details of your mapping; the UI will surface more information about errors.


Thank you for your support.

There are only the below options on the "About rule" page:
Name, description, severity, risk score, tags; advanced settings: reference URL, false positives, MITRE ATT&CK, investigation guide.

Could you please let me know on which page I should select the timestamp override mentioned in your screenshot?

In my index, the @timestamp field is already indexed and mapped in an ECS-compliant format.

The timestamp override function was introduced in the 7.9.0 release, and is found in the advanced settings of the About rule step, when creating or editing a rule.

More details are available in the docs.

