Filebeat not respecting ingest pipeline in output settings when using syslog inputs

Hi,

I'm having issues using ingest pipelines with Filebeat. Filebeat is supposed to collect syslog messages and send them through the ingest pipeline "syslog_distributor". However, the incoming documents are not passed to the pipeline. Here's my config:

filebeat.inputs:
- type: syslog
  pipeline: syslog_distributor
  protocol.tcp:
    host: "0.0.0.0:514"
  tags: ["syslog-tcp"]
output.elasticsearch:
  hosts: ["redacted"]
  username: "redacted"
  password: "redacted"
  pipeline: syslog_distributor
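
For anyone reproducing this: the pipeline itself can be confirmed to exist with the ingest API, e.g. from Kibana Dev Tools (a sanity-check sketch, not part of the original report):

```
GET _ingest/pipeline/syslog_distributor
```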

I tried the following scenarios:

  • When I define the pipeline at the syslog input level, it works.
  • When I define the pipeline at the output while using the syslog input, it does not work.
  • When I define the pipeline at the output while using a normal log input, it works again.

As per the documentation, it shouldn't matter where the pipeline is defined.

So it seems to be an issue with the pipeline settings and the syslog input? The behaviour is the same with Filebeat 7.8 and 7.9.1.

Hi @nemhods, the pipeline can be configured in the input and also in the Elasticsearch output. If the pipeline is configured in both the input and the output, the option from the input is used.
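That precedence can be sketched as follows (a simplified model of the documented behaviour, an assumption rather than Filebeat's actual code):

```python
def effective_pipeline(input_pipeline, output_pipeline):
    """Simplified model of Filebeat's documented precedence:
    an input-level pipeline overrides the output-level one."""
    return input_pipeline if input_pipeline is not None else output_pipeline

# The input-level setting wins when both are configured.
print(effective_pipeline("syslog_distributor", "some_other_pipeline"))
```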
When you say it does not work, do you mean that events are still reaching Elasticsearch but no pipeline parsing appears to have been applied, or that no events are ingested at all?
Can you enable debug logging and check for any error messages in the Filebeat and Elasticsearch logs?
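
For reference, debug logging can be enabled in filebeat.yml with something like the following (a sketch; the selector names are taken from the log prefixes below and may need adjusting):

```yaml
logging.level: debug
# Restrict output to the relevant subsystems; use ["*"] for everything.
logging.selectors: ["publisher", "elasticsearch", "esclientleg"]
```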

Hey,

I'm aware of the two locations for configuring the pipeline, and yes: only configuring the pipeline at the input seems to work. Configuring both works as well, naturally, since the config from the input is then used, like you said.

When I configure the pipeline at the output, the events still reach elasticsearch, but without using the pipeline.

I actually set up a packet capture, and I can see that Filebeat does not include the pipeline parameter in the bulk request. Remember, this only happens when I use the syslog input, not when I use a plain file input. So I'm pretty sure it's a Filebeat issue. The Filebeat logs show nothing suspicious:

2020-09-10T13:46:32.593Z        INFO    [registrar]     registrar/registrar.go:109      States Loaded from registrar: 0
2020-09-10T13:46:32.593Z        INFO    [crawler]       beater/crawler.go:71    Loading Inputs: 2
2020-09-10T13:46:32.593Z        WARN    [cfgwarn]       syslog/input.go:111     EXPERIMENTAL: Syslog input type is used
2020-09-10T13:46:32.593Z        INFO    [crawler]       beater/crawler.go:141   Starting input (ID: 11171301969492270368)
2020-09-10T13:46:32.594Z        DEBUG   [registrar]     registrar/registrar.go:140      Starting Registrar
2020-09-10T13:46:32.594Z        INFO    [syslog]        syslog/input.go:151     Starting Syslog input   {"protocol": "tcp"}
2020-09-10T13:46:32.594Z        WARN    [cfgwarn]       syslog/input.go:111     EXPERIMENTAL: Syslog input type is used
2020-09-10T13:46:32.594Z        INFO    [crawler]       beater/crawler.go:141   Starting input (ID: 1340960675101911601)
2020-09-10T13:46:32.594Z        INFO    [crawler]       beater/crawler.go:108   Loading and starting Inputs completed. Enabled inputs: 2
2020-09-10T13:46:32.594Z        INFO    [syslog]        syslog/input.go:151     Starting Syslog input   {"protocol": "udp"}
2020-09-10T13:46:32.595Z        INFO    [tcp]   common/listener.go:87   Started listening for TCP connection    {"address": "0.0.0.0:514"}
2020-09-10T13:46:32.595Z        INFO    [tcp]   common/listener.go:127  Started listening for TCP connection    {"address": "0.0.0.0:514"}
2020-09-10T13:46:32.595Z        INFO    [udp]   udp/server.go:81        Started listening for UDP connection    {"address": "0.0.0.0:514"}
2020-09-10T13:46:32.631Z        DEBUG   [processors]    processing/processors.go:187    Publish event: {
EVENT REDACTED
}
2020-09-10T13:46:33.631Z        INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2020-09-10T13:46:33.631Z        INFO    [publisher]     pipeline/retry.go:223     done
2020-09-10T13:46:33.631Z        INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(elasticsearch(https://elasticsearch.redacted:9200))
2020-09-10T13:46:33.631Z        DEBUG   [esclientleg]   eslegclient/connection.go:290   ES Ping(url=https://elasticsearch.redacted:9200)
2020-09-10T13:46:33.690Z        DEBUG   [esclientleg]   eslegclient/connection.go:313   Ping status code: 200
2020-09-10T13:46:33.690Z        INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.9.0
2020-09-10T13:46:33.690Z        DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://elasticsearch.redacted:9200/_license?human=false  <nil>
2020-09-10T13:46:33.706Z        DEBUG   [license]       licenser/check.go:31    Checking that license covers %sBasic
2020-09-10T13:46:33.706Z        INFO    [license]       licenser/es_callback.go:51      Elasticsearch license: Basic
2020-09-10T13:46:33.707Z        DEBUG   [esclientleg]   eslegclient/connection.go:290   ES Ping(url=https://elasticsearch.redacted:9200)
2020-09-10T13:46:33.707Z        DEBUG   [esclientleg]   eslegclient/connection.go:313   Ping status code: 200
2020-09-10T13:46:33.707Z        INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.9.0
2020-09-10T13:46:33.707Z        DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://elasticsearch.redacted:9200/_xpack  <nil>
2020-09-10T13:46:33.724Z        INFO    [index-management]      idxmgmt/std.go:261      Auto ILM enable success.
2020-09-10T13:46:33.724Z        DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://elasticsearch.redacted:9200/_ilm/policy/ct_default  <nil>
2020-09-10T13:46:33.724Z        INFO    [index-management.ilm]  ilm/std.go:139  do not generate ilm policy: exists=true, overwrite=false
2020-09-10T13:46:33.724Z        INFO    [index-management]      idxmgmt/std.go:274      ILM policy successfully loaded.
2020-09-10T13:46:33.724Z        INFO    [index-management]      idxmgmt/std.go:407      Set setup.template.name to '{filebeat-7.9.1 {now/d}-000001}' as ILM is enabled.
2020-09-10T13:46:33.725Z        INFO    [index-management]      idxmgmt/std.go:412      Set setup.template.pattern to 'filebeat-7.9.1-*' as ILM is enabled.
2020-09-10T13:46:33.725Z        INFO    [index-management]      idxmgmt/std.go:446      Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.9.1 {now/d}-000001} as ILM is enabled.
2020-09-10T13:46:33.725Z        INFO    [index-management]      idxmgmt/std.go:450      Set settings.index.lifecycle.name in template to {default {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2020-09-10T13:46:33.725Z        DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://elasticsearch.redacted:9200/_cat/templates/filebeat-7.9.1  <nil>
2020-09-10T13:46:33.726Z        INFO    template/load.go:89     Template filebeat-7.9.1 already exists and will not be overwritten.
2020-09-10T13:46:33.726Z        INFO    [index-management]      idxmgmt/std.go:298      Loaded index template.
2020-09-10T13:46:33.726Z        DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://elasticsearch.redacted:9200/_alias/filebeat-7.9.1  <nil>
2020-09-10T13:46:33.726Z        INFO    [index-management]      idxmgmt/std.go:309      Write alias successfully generated.
2020-09-10T13:46:33.726Z        DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://elasticsearch.redacted:9200/  <nil>
2020-09-10T13:46:33.727Z        INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(elasticsearch(https://elasticsearch.redacted:9200)) established
2020-09-10T13:46:33.734Z        DEBUG   [elasticsearch] elasticsearch/client.go:229     PublishEvents: 3 events have been published to elasticsearch in 7.143985ms.
2020-09-10T13:46:33.734Z        DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [0: 0, 3]
2020-09-10T13:46:33.734Z        DEBUG   [publisher]     memqueue/eventloop.go:535       broker ACK events: count=3, start-seq=1, end-seq=3

2020-09-10T13:46:33.734Z        DEBUG   [acker] beater/acker.go:64      stateless ack   {"count": 3}
2020-09-10T13:46:33.734Z        DEBUG   [publisher]     memqueue/ackloop.go:128 ackloop: return ack to broker loop:3
2020-09-10T13:46:33.734Z        DEBUG   [publisher]     memqueue/ackloop.go:131 ackloop:  done send ack
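
For context on what the packet capture was looking for: when Filebeat does forward a pipeline, each action in the `_bulk` request names it in the action metadata, roughly like this (an illustrative sketch, not an actual capture; field values are placeholders):

```json
{ "index": { "_index": "filebeat-7.9.1", "pipeline": "syslog_distributor" } }
{ "message": "<redacted syslog line>", "tags": ["syslog-tcp"] }
```

With the syslog input, that `pipeline` key is missing from the action metadata.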

Hi,
I had a somewhat similar problem.
I hope it helps.