Enrich Logs Without Breaking ECS or Overriding Default Pipelines

Hi everyone,
I'm using the free tier of the Elastic Stack v9.0.1 with the following setup:
• Kibana v9.0.1
• Elastic Agent (enrolled via Fleet)
• Sysmon integration
• Managed data streams like logs-windows.sysmon_operational-*

Goal:
I want to enrich all logs automatically (e.g., tagging superusers based on user.name) without altering ECS mappings or modifying the default Fleet/Agent processing. Specifically, I want:
• To enrich on user.name using an enrich policy
• To preserve ECS fields like process.command_line, user.name, etc., and not fall back to winlog…
• Not to break existing mappings by replacing them with raw fields like winlog.event_data.*

What I tried:

  1. Created an enrich policy named superuser-policy — works fine.
  2. Built an ingest pipeline called superuser-enrich-pipeline that enriches on user.name (see the sketch after this list).
  3. Applied it as a final_pipeline using this index template:
    PUT _index_template/logs-superuser-final-template
    {
      "index_patterns": ["logs-elastic*", "logs-endpoint*", "logs-network*", "logs-system*", "logs-windows*", "logs-windows.sysmon*"],
      "priority": 1500,
      "template": {
        "settings": {
          "index.final_pipeline": "superuser-enrich-pipeline"
        }
      },
      "data_stream": {}
    }
  4. Rolled over using: POST /logs-windows.sysmon/_rollover
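
For reference, steps 1 and 2 look roughly like this (the superusers lookup index and the target_field below are placeholders for my actual setup):

  # "superusers" is the lookup index holding the superuser records (placeholder name)
  PUT _enrich/policy/superuser-policy
  {
    "match": {
      "indices": "superusers",
      "match_field": "user.name",
      "enrich_fields": ["user.roles"]
    }
  }

  POST _enrich/policy/superuser-policy/_execute

  # target_field is a placeholder; the lookup itself is on user.name
  PUT _ingest/pipeline/superuser-enrich-pipeline
  {
    "processors": [
      {
        "enrich": {
          "policy_name": "superuser-policy",
          "field": "user.name",
          "target_field": "superuser",
          "ignore_missing": true
        }
      }
    ]
  }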

The Issue:
After the rollover and template activation:
• ECS fields like process.command_line or user.name disappear
• Instead I see raw fields like winlog.event_data.CommandLine
• This breaks my enrich processor, since user.name no longer exists to match against
It seems that even though I only used final_pipeline, it impacted the ECS normalization done by the Elastic Agent integration pipeline.

Question:
• What is the proper way to enrich logs (via user.name) after ECS normalization without impacting default Fleet pipelines?
• Does index.final_pipeline unexpectedly override or bypass ECS mapping in Fleet-managed data streams?
• Is there a best practice for safely enriching logs post-processing without touching ECS or default pipelines?

Any help or guidance would be greatly appreciated.
Thanks in advance to the Elastic team and community!

The recommended way is to use one of the @custom pipelines to add any customization that you want.

With ingest pipelines you have different levels of customization: you can apply customizations at the dataset level, at the integration level, or globally for all integrations.

In your example, if you check the ingest pipeline used by the sysmon_operational integration, you will see this at the end:

  {
    "pipeline": {
      "name": "logs@custom",
      "ignore_missing_pipeline": true,
      "description": "[Fleet] Pipeline for all data streams of type `logs`"
    }
  },
  {
    "pipeline": {
      "name": "logs-windows.integration@custom",
      "ignore_missing_pipeline": true,
      "description": "[Fleet] Pipeline for all data streams of type `logs` defined by the `windows` integration"
    }
  },
  {
    "pipeline": {
      "name": "logs-windows.sysmon_operational@custom",
      "ignore_missing_pipeline": true,
      "description": "[Fleet] Pipeline for the `windows.sysmon_operational` dataset"
    }
  }

These are the different levels of customization that you have. You need to create those ingest pipelines yourself, since they are not created automatically, and then add your processors to them.
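
For your case, the dataset-level pipeline is probably the right place. Something along these lines should work; the pipeline name comes from the snippet above, while the target_field and ignore_missing values are only a suggestion to adapt:

  # the target_field is a placeholder, adjust it to whatever your enrich policy returns
  PUT _ingest/pipeline/logs-windows.sysmon_operational@custom
  {
    "description": "Custom enrichment for the windows.sysmon_operational dataset",
    "processors": [
      {
        "enrich": {
          "policy_name": "superuser-policy",
          "field": "user.name",
          "target_field": "superuser",
          "ignore_missing": true
        }
      }
    ]
  }

Because this @custom pipeline is called at the end of the managed integration pipeline, after the winlog fields have been normalized to ECS, user.name is already populated when your enrich processor runs, and the managed pipeline itself stays untouched.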

You should never change index.default_pipeline or index.final_pipeline for Elastic Agent integration indices. Also, because only the highest-priority matching index template applies, your priority-1500 template replaced the managed integration template entirely, so its default_pipeline and ECS mappings were dropped; that is why you now see raw winlog.event_data.* fields instead of the ECS ones.
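
In your case that means deleting the template you created and rolling over again so the managed template applies to the next backing index (the data stream name below assumes the default namespace):

  DELETE _index_template/logs-superuser-final-template

  # adjust the namespace if you are not using "default"
  POST logs-windows.sysmon_operational-default/_rollover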

Mapping customizations need to be done in the custom component templates. In this case you can customize globally with the logs@custom component template or per dataset; it is not possible to customize at the integration level yet.
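
For example, if the fields added by your enrich processor should be mapped only for this dataset, you can put the mapping in the dataset-level custom component template. The superuser.roles field below is just a placeholder for whatever your enrich policy actually adds, and a rollover is still needed before a new backing index picks it up:

  # "superuser.roles" is a placeholder field name
  PUT _component_template/logs-windows.sysmon_operational@custom
  {
    "template": {
      "mappings": {
        "properties": {
          "superuser": {
            "properties": {
              "roles": { "type": "keyword" }
            }
          }
        }
      }
    }
  }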