Adding a custom field in alerts without defining in query

I have a field called organization.name in my logs. I have created a rule based on the field externalId only. However, I want the organization.name field to also be fetched when the alert is generated. I don't want to explicitly define this field in the rule query, since if I add a new organization I would have to update all my rules again. Is there a way I can achieve this?


Heya @PD98, welcome to our community! Thanks for the post.

Good news: you do not have to add a field name to your rule in order for it to be populated in the detection alert.

For SIEM/Security detection rule types that identify a unique source log event (e.g., Custom Query, Event Correlation, Indicator Match), any ECS-defined field present in your source event will be automatically copied into the detection alert.

Since organization.name is an ECS-defined field, you should be all set.
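To illustrate the behavior (hypothetical, heavily simplified documents — not the actual signal schema): even though the rule query references only externalId, ECS fields present in the source event are copied into the alert:

```json
{
  "source_event": {
    "@timestamp": "2020-11-19T12:00:00.000Z",
    "externalId": "4672",
    "organization": { "name": "acme-corp" }
  },
  "resulting_alert_includes": {
    "externalId": "4672",
    "organization": { "name": "acme-corp" },
    "signal": { "rule": { "name": "My example rule" } }
  }
}
```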

Stepping back a bit, I want to ensure that you are aware of Elastic Common Schema (ECS).

The Elastic SIEM/Security app, including its detection rules, signals, and detection alerts, requires your data to be indexed in an ECS-compliant format. ECS is an open source, community-developed schema that specifies field names and Elasticsearch data types for each field, and provides descriptions and example usage.

The easiest way to get your data into an ECS-compliant format is to use an Elastic-supplied integration (e.g., a Filebeat module or an Elastic Agent integration), which will ingest and index your data in an ECS-compliant format. Elastic provides a growing list of these integrations that you can find on our Integrations page.

If you're using a custom data ingestion method (a beat, Logstash, or an ingest node pipeline), or one provided by a third party, then you may need to convert your data so that it is in an ECS-compliant format before you can use the SIEM/Security app. This can be done by creating your own beat/module, or your own Logstash configuration for each data source, which will convert your data to ECS during the ingestion process.

General guidelines for creating ECS-compliant data:

  1. Each indexed document (e.g., your log, event, etc.) MUST have the @timestamp field.
  2. Your index mapping template must specify the Elasticsearch field data type for each field as defined by ECS. For example, your @timestamp field must be of the date field data type. This ensures that there will not be any mapping conflicts in your indices.
  3. The original fields from your log/event SHOULD be copied/renamed/converted to the corresponding ECS-defined field name and data type.
  4. Additional ECS fields, such as the ECS Categorization fields SHOULD be populated for each log/event, to allow proper inclusion of your data into dashboards and detection rules.
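As an illustration of step 3, an ingest node pipeline can do the copy/rename during ingestion. Below is a minimal sketch (the source field name orgName and the pipeline purpose are hypothetical; adapt to your own log schema) that you could register with PUT _ingest/pipeline/<your-pipeline-name>:

```json
{
  "description": "Sketch: rename a source-specific field to its ECS equivalent",
  "processors": [
    {
      "rename": {
        "field": "orgName",
        "target_field": "organization.name",
        "ignore_missing": true
      }
    },
    {
      "set": {
        "field": "event.kind",
        "value": "event",
        "override": false
      }
    }
  ]
}
```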

A list of the specific ECS fields used by the SIEM/Security app is provided in this reference.

Sorry for the information dump, but we've found that non-ECS-compliant data is a common root cause for users who experience problems getting their SIEM/Security app rules/signals to work.

Please let us know if this is helpful.

Hi Mike,

Thanks for the welcome as well as your reply.
Regarding the ECS field, I have ensured that the organization.name field is present in the logs, along with the mandatory fields mentioned in your post, like @timestamp, with their correct mappings. The detection rule is based on a single query for externalId. However, when that rule is triggered, I am still not able to see the organization.name field being populated in the alert.

Hi @PD98,
Thanks for confirming your data's ECS compliance and mappings.

I don't have data with organization.name populated, so I have not tested whether the field shows up in my alerts.

Would you please provide a bit more data:

  • Where are you viewing the alert when not seeing the field? (e.g. Timeline, Discover?)
  • Is this the only field you've noticed missing?
  • Do you use the organization.id field in your logs? Is it present in the alert?
  • Would you be able to share your mappings for the index into which these events are ingested?
  • How about an actual log event?


Hi Mike,
I'll try to answer your points as well as explain briefly as to what I am trying to achieve.

  • I am viewing the alert in the Detections tab under Security. I am trying to add one more ECS column, organization.name, apart from the default ones (like Rule, Version, Severity, etc.). But the alert is not populating/fetching the field from the log. The only thing that seems to work is adding organization.name to the rule query explicitly (e.g. externalId : XXXX and organization.name : YYYY), after which the column is populated. But this is not feasible, since for every new organization added I would have to change every rule.
  • organization.name is the only ECS field I am trying as of now.
  • No, I am not using the organization.id field in my logs as of now.
  • The following is the mapping for the organization field:
"organization": {
  "properties": {
    "name": {
      "type": "text",
      "norms": false,
      "fields": {
        "keyword": {
          "type": "keyword",
          "ignore_above": 256
        }
      }
    }
  }
}
  • I am attaching a snippet of a log which shows the organization.name field.


Hi @PD98,

Thanks for the additional information. You should not have to include an ECS field in your detection rule query in order to have that field show up in detection alert documents. So let's look for something that might be amiss:

There may be an issue with your mapping for the organization.name field.
Your mapping creates a traditional multi-field data type; however, ECS structures multi-fields differently than traditional Elasticsearch mappings.

Check out the ECS Sample index mapping template here

"organization": {
  "properties": {
    "id": {
      "type": "keyword",
      "ignore_above": 1024
    },
    "name": {
      "type": "keyword",
      "ignore_above": 1024,
      "fields": {
        "text": {
          "type": "text",
          "norms": false
        }
      }
    }
  }
}

You will see that the naming of the text and keyword portions of the field is swapped relative to your mapping. This could be causing a conflict when the signal documents are being created, which could cause that field not to be properly mapped into your detection alerts.

Could you try updating your index mapping template accordingly and see if that helps?
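For reference, one way to apply that change in 7.x (assuming a legacy index template covering your logstash-* indices; the template name and pattern below are placeholders for yours) is to merge the ECS-style organization mapping into the template via PUT _template/<your-template-name>, then roll over or reindex so new indices pick it up. Sketch only:

```json
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "organization": {
        "properties": {
          "id": { "type": "keyword", "ignore_above": 1024 },
          "name": {
            "type": "keyword",
            "ignore_above": 1024,
            "fields": {
              "text": { "type": "text", "norms": false }
            }
          }
        }
      }
    }
  }
}
```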

Hi Mike,

I updated my index mapping for organization, but the organization.name field is still not being populated.

Hi @PD98

I updated my index mapping for organization, but the organization.name field is still not being populated.

Sorry to hear that.
Can you let me know what version of the Elastic Stack you are using?

Please excuse me if you've already done these things, but let me ask a few more questions:

After updating your mapping per ECS, did you:

  1. refresh the fields in your index pattern?
  2. ingest more logs containing organization.name?
  3. have a new detection alert created by the rule execution?
  4. open the resulting detection alert in expanded view?
  5. verify that the field is not present?

[Screenshot: refreshing fields in your log index pattern after updating the mapping (filebeat-* index pattern shown)]

Can you look at the detection alert event in Kibana Discover to see if the field is present?

  • detection alerts are stored in a system/hidden index called .siem-signals-<space>, where <space> is the name of the Kibana space you are using when you created the rule. For example if you are using the Default space, your index will be .siem-signals-default
  • you will need to create a new Kibana index pattern in order to be able to view your detection alerts in Discover.
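Once that index pattern exists, you can also inspect the raw signal documents directly from Kibana Dev Tools by running GET .siem-signals-default/_search (index name assumes the Default space) with a body like the sketch below, which pulls the most recent signal that actually carries organization.name:

```json
{
  "size": 1,
  "query": {
    "exists": { "field": "organization.name" }
  },
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ]
}
```

If this returns no hits while alerts exist, the field is not making it into the signal documents at all.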

[Screenshot: creating a new index pattern to view detection alerts (aka signals)]

[Screenshot: using the newly created index pattern in Discover to look at detection alerts]


Hi @Mike_Paquette,

I am using version 7.9.2 of the Elastic Stack.

  1. Yes, I refreshed the index pattern.
  2. Yes, the newly ingested logs do contain the organization.name field.
  3. Yes, fresh alerts were created by rule execution.
  4. I opened the alert in expanded view, but organization.name was not present.

Thanks for the guide to creating an index pattern for the signals index. I followed it and inspected the alert in Kibana Discover, but the organization.name field was not present there either.

Thanks for the information. Sorry you're continuing to have trouble.

Our engineers tried this in a 7.9.2 build and it seems to be working as expected. We didn't include organization.name in the rule query, and the signal does contain organization.name.

[Screenshot: the simple rule query we used]

[Screenshot: a detection alert in the alerts table showing the organization.name field]

We've verified that there's no inherent problem or bug, so we're back to suspecting something related to your index mappings.

Are you able to send the complete mapping for the index in which your logs are ingested?
Also, can you confirm the index pattern(s) used in your detection rule? In fact, do you mind exporting your detection rule and posting the exported rule?

Hi Mike,

I am using the index pattern logstash-* in my detection rule.
I am also posting the complete index mapping, in which I changed the mapping for organization, as well as the exported rule.
(P.S. The forums have a character limit so I am posting a pastebin link which contains my index mapping)

Index Mapping =>

Exported Rule =>

{"author":[],"actions":[],"created_at":"2020-11-05T10:24:43.640Z","updated_at":"2020-11-19T12:12:07.319Z","created_by":"elastic","description":"Adversaries may exploit software vulnerabilities in an attempt to collect elevate privileges. Exploitation of a software vulnerability occurs when an adversary takes advantage of a programming error in a program, service, or within the operating system software or kernel itself to execute adversary-controlled code. Security constructs such as permission levels will often hinder access to information and use of certain techniques, so adversaries will likely need to perform privilege escalation to include use of software exploitation to circumvent those restrictions.\n\nWhen initially gaining access to a system, an adversary may be operating within a lower privileged process which will prevent them from accessing certain resources on the system. Vulnerabilities may exist, usually in operating system components and software commonly running at higher permissions, that can be exploited to gain higher levels of access on the system. This could enable someone to move from unprivileged or user level permissions to SYSTEM or root permissions depending on the component that is vulnerable. This may be a necessary step for an adversary compromising a endpoint system that has been properly configured and limits other privilege escalation methods.","enabled":true,"false_positives":[],"filters":[],"from":"now-120s","id":"0a08f1a5-9857-41e1-ab84-5cd5b176ce56","immutable":false,"index":["logstash-*"],"interval":"1m","rule_id":"7e01639d-9158-4d41-af39-2cc6487de8fa","language":"kuery","license":"","output_index":".siem-signals-default","max_signals":100,"risk_score":50,"risk_score_mapping":[],"name":"Exploitation for Privilege Escalation (T1068)","query":"externalId : \"4672\"","references":[],"meta":{"from":"1m","kibana_siem_app_url":""},"severity":"low","severity_mapping":[],"updated_by":"elastic","tags":[],"to":"now","type":"threshold","threat":[{"framework":"MITRE ATT&CK","technique":[{"reference":"","name":"Exploitation for Privilege Escalation","id":"T1068"}],"tactic":{"reference":"","name":"Privilege Escalation","id":"TA0004"}}],"threshold":{"field":"deviceAddress","value":1},"throttle":"no_actions","timestamp_override":"@timestamp","version":44,"exceptions_list":[]}


Hi @PD98, thanks for sending in the details.

We noticed that your rule is a "Threshold" type rule. Threshold rules are based on Elasticsearch aggregations, and their detection alerts do not return all the details of all of the source events that are aggregated to produce the detection alert.

Specifically, if your threshold rule query returns results that have multiple different values of a field such as organization.name, there is not a single value of the field to add to the detection alert (signal), so it is left blank.

This would explain why, when you included the term and organization.name : "xyz" in your query, the field appears in the signal: all the documents returned in the aggregation have the same value of organization.name.

If you create a new rule of type "Custom query", then we would expect the fields from the underlying source events (your logs) to be included in the detection alert.
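To make the difference concrete, a threshold rule's behavior is roughly that of a terms aggregation like the sketch below (illustrative shape only, not the rule engine's exact internal query — the aggregation name is made up). Its buckets contain only the grouped field and a document count, so per-event fields like organization.name have nowhere to live in the resulting signal:

```json
{
  "size": 0,
  "query": {
    "match": { "externalId": "4672" }
  },
  "aggs": {
    "threshold_buckets": {
      "terms": {
        "field": "deviceAddress",
        "min_doc_count": 1
      }
    }
  }
}
```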

[Screenshot: selecting the Custom query rule type during rule creation]

Also, as mentioned earlier in this thread, in order to fully use the Security App, you need to get your data converted to ECS format.

Can you share what is the data source from which these logs are being collected?

Hi @Mike_Paquette

Perfect explanation. Thanks a lot.
Regarding the data sources, currently I am ingesting logs from ArcSight connectors into