Filebeat pipeline setup for system module ignores var.convert_timezone: true?

(Pete) #1

I'm trying to set up the filebeat system module to send logs to Elasticsearch through Logstash. I'm following the filebeat "getting started" docs, but the ingest pipeline created by the filebeat setup command is missing the property "timezone" : "{{ event.timezone }}", so the documents are not being indexed as expected. Looking at the pipeline template, it checks the var.convert_timezone setting, which I have set to true in system.yml. These are the steps I'm following (from the guide):

  • edit filebeat.yml to output to logstash instead of elasticsearch

  • edit system.yml

    syslog:
      enabled: true
      var.convert_timezone: true

    auth:
      enabled: true
      var.convert_timezone: true

  • restart filebeat

  • filebeat modules enable system

  • filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200']

  • filebeat setup --pipelines --modules system -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200']

the ingest pipeline created in elasticsearch does not have "timezone" : "{{ event.timezone }}"

"filebeat-7.0.1-system-syslog-pipeline" : {
    "description" : "Pipeline for parsing Syslog messages.",
    "processors" : [
    {
        "grok" : {
        "field" : "message",
        "patterns" : [
            """%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYMULTILINE:system.syslog.message}""",
            "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}",
            """%{TIMESTAMP_ISO8601:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYMULTILINE:system.syslog.message}"""
        ],
        "pattern_definitions" : {
            "GREEDYMULTILINE" : "(.|\n)*"
        },
        "ignore_missing" : true
        }
    },
    {
        "remove" : {
        "field" : "message"
        }
    },
    {
        "rename" : {
        "field" : "system.syslog.message",
        "target_field" : "message",
        "ignore_missing" : true
        }
    },
    {
        "date" : {
        "field" : "system.syslog.timestamp",
        "target_field" : "@timestamp",
        "formats" : [
            "MMM  d HH:mm:ss",
            "MMM dd HH:mm:ss",
            "yyyy-MM-dd'T'HH:mm:ss.SSSSSSZZ"
        ],
        "ignore_failure" : true
        }
    },
    {
        "remove" : {
        "field" : "system.syslog.timestamp"
        }
    }
    ],
    "on_failure" : [
    {
        "set" : {
        "field" : "error.message",
        "value" : "{{ _ingest.on_failure_message }}"
        }
    }
    ]
}
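For reference, the pipeline above can be inspected with the Elasticsearch ingest API (assuming Elasticsearch on localhost:9200, as in the setup commands; the pipeline name matches the filebeat 7.0.1 install here):

```shell
# Fetch the generated syslog pipeline; with var.convert_timezone applied,
# the "date" processor should contain "timezone" : "{{ event.timezone }}"
curl -s 'http://localhost:9200/_ingest/pipeline/filebeat-7.0.1-system-syslog-pipeline?pretty'
```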

I managed to get this working previously using the filebeat setup commands, but I cannot figure out the correct order of steps to recreate it. Any ideas what I am missing?

(Pete) #2

So it looks like the filebeat setup command is loading filebeat.yml but ignoring the module configs when setting up the pipeline.

If you configure var.convert_timezone: true in filebeat.yml,

or pass it to the setup command as a -M override, like this:

filebeat setup --pipelines --modules system -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -M system.auth.var.convert_timezone=true -M system.syslog.var.convert_timezone=true

then the ingest pipeline is created as expected.
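For the first option, a minimal sketch of the relevant filebeat.yml section (assuming the module is configured inline under filebeat.modules rather than in modules.d/system.yml, so that filebeat setup definitely sees it):

```yaml
# filebeat.yml -- hypothetical minimal inline module config so that
# "filebeat setup" picks up var.convert_timezone when building pipelines
filebeat.modules:
  - module: system
    syslog:
      enabled: true
      var.convert_timezone: true
    auth:
      enabled: true
      var.convert_timezone: true
```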

(Adrian Serrano) #3

I tested and that seems to be the case. I will contact the Beats team to see if this is a bug or expected behavior.

When dealing with modules, the pipeline is loaded automatically if it doesn't exist already at the time the module publishes its first document. So it's not necessary to use setup --pipelines.

This also means that if you change a setting that affects the pipeline, you need to manually delete the pipeline before running filebeat.
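For example, to force the syslog pipeline to be recreated after changing var.convert_timezone (pipeline name assumed to match the 7.0.1 pipeline shown earlier in this thread):

```shell
# Remove the stale pipeline; Filebeat will reload it the next time
# the system module publishes a document
curl -s -X DELETE 'http://localhost:9200/_ingest/pipeline/filebeat-7.0.1-system-syslog-pipeline'
```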