Unable to use the replace filter on filebeat.yml

I cannot replace the value of a field using the "replace" processor on filebeat.yml. The service is running but the field returns a null value. See the configuration below:

```yaml
  - replace:
      fields:
        - field: "decoded.cef.severity"
          pattern: "8"
          replace: "8/Medium"
      ignore_missing: true
      fail_on_error: true
```

I guess you wrote `replace` again as the key for the replacement string in your snippet. It should be `replacement` instead of `replace`.

That is what is given in the Elastic documentation:

List of one or more items. Each item contains a `field: field-name`, `pattern: regex-pattern`, and `replacement: replacement-string`, where:

  • `field` is the original field name. You can use the `@metadata.` prefix in this field to replace values in the event metadata instead of event fields.
  • `pattern` is the regex pattern to match the field's value.
  • `replacement` is the replacement string to use to update the field's value.

The following example changes the path from /usr/bin to /usr/local/bin:

```yaml
  - replace:
      fields:
        - field: "file.path"
          pattern: "/usr/"
          replacement: "/usr/local/"
      ignore_missing: false
      fail_on_error: true
```
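Applied to your snippet, the processor would then look like this (a sketch using your field names; I can't verify it against your data):

```yaml
  - replace:
      fields:
        - field: "decoded.cef.severity"
          pattern: "8"          # regex: an unanchored "8" also matches inside e.g. "18"
          replacement: "8/Medium"
      ignore_missing: true
      fail_on_error: true
```

If the severity can ever be more than one digit, anchoring the pattern as `"^8$"` keeps it from matching inside other values.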

Greetings, I made the changes as stated in the previous reply. I got this message:

"Failed to replace fields in processor: could not fetch value for key: decoded.cef.severity, Error: key not found"

Can you send the whole Filebeat configuration file so I can get a better idea?
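In the meantime, since the error says the key was not found, some events probably just don't have `decoded.cef.severity` (for example, events where `decode_cef` failed or the severity extension is absent). If so, letting the processor skip those events should stop the hard failure — a guess until we see the full config:

```yaml
  - replace:
      fields:
        - field: "decoded.cef.severity"
          pattern: "8"
          replacement: "8/Medium"
      ignore_missing: true     # skip events that don't have the field
      fail_on_error: false     # don't abort the processor chain on an error
```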

```yaml
# ============================== Filebeat inputs ===============================

filebeat.inputs:

- type: log
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - "C:/Users/admin/Desktop/Syslog-Watcher-Cortex/*.txt"

  scan_frequency: 20s

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  reload.period: 30s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1

# ================================== General ===================================

name: XXXX_CortexProbe2.0
tags: ["XXXX_CortexEvents2", "Cortex", "forwarded"]

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  hosts: ["XXXX:9200"]
  protocol: "https"
  username: "XXXX"
  password: XXXX

# ================================= Processors =================================

processors:
  - rename:
      fields:
        - {from: "message", to: "event.original"}
  - decode_cef:
      field: event.original
      target_field: decoded.cef
      ignore_missing: true
      ignore_failure: true
  - timestamp:
      field: decoded.cef.extensions.endTime
      layouts:
        - '2006-01-02T15:04:05Z'
      test:
        - '2022-12-13T16:52:26.510079Z'
  - drop_fields:
      fields: [decoded.cef.extensions.endTime]
  - replace:
      fields:
        - field: "decoded.cef.severity"
          pattern: "8"
          replacement: "8/Medium"
      ignore_missing: false
      fail_on_error: true

# ============================= X-Pack Monitoring ==============================

monitoring.enabled: true

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
```
