Kubernetes, Filebeat, and Kafka connection issues

I am working on getting logs out of Kubernetes using the "stable/filebeat" Helm chart. Everything is running on Ubuntu 16.04 servers.

There seems to be a change between two versions of the filebeat-oss Docker container (6.3.2 and 6.4.1) that causes an issue when communicating with Kafka.

Here is the Filebeat config I am using for the container:

config:
  filebeat.config:
    prospectors:
      # Mounted `filebeat-prospectors` configmap:
      path: ${path.config}/prospectors.d/*.yml
      # Reload prospectors configs as they change:
      reload.enabled: false
    modules:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

  processors:
    - add_cloud_metadata:

  filebeat.prospectors:
    - type: log
      enabled: true
      paths:
        - /var/log/containers/*.log
        - /var/log/syslog
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
        - drop_event:
            when:
              equals:
                kubernetes.container.name: "filebeat"

  output.file:
    enabled: false

  output.kafka:
    hosts: ["<redacted>:9092", "<redacted>:9092", "<redacted>:9092"]
    topic: "logstash"
    required_acks: 1
    compression: gzip
    max_message_bytes: 1000000

  # When a key contains a period, use this format for setting values on the command line:
  # --set config."http\.enabled"=true
  http.enabled: false
  http.port: 5066

The connection succeeds with filebeat-oss version 6.3.2 and the logs are as follows:

2018-10-02T18:08:48.792Z	INFO	instance/beat.go:225	Setup Beat: filebeat; Version: 6.3.2
....snip....
2018-10-02T18:08:53.095Z	INFO	kafka/log.go:36	Connected to broker at <redacted>:9092 (unregistered)
2018-10-02T18:08:53.096Z	INFO	kafka/log.go:36	client/brokers registered new broker #2 at <redacted>:9092
2018-10-02T18:08:53.096Z	INFO	kafka/log.go:36	client/brokers registered new broker #1 at <redacted>:9092
2018-10-02T18:08:53.096Z	INFO	kafka/log.go:36	client/brokers registered new broker #3 at <redacted>:9092
2018-10-02T18:08:53.189Z	INFO	kafka/log.go:36	kafka message: Successfully initialized new client
2018-10-02T18:08:53.190Z	INFO	kafka/log.go:36	producer/broker/1 starting up
2018-10-02T18:08:53.190Z	INFO	kafka/log.go:36	producer/broker/1 state change to [open] on logstash/0
2018-10-02T18:08:53.190Z	INFO	kafka/log.go:36	producer/broker/1 state change to [open] on logstash/2
2018-10-02T18:08:53.190Z	INFO	kafka/log.go:36	producer/broker/2 starting up
2018-10-02T18:08:53.190Z	INFO	kafka/log.go:36	producer/broker/2 state change to [open] on logstash/1
2018-10-02T18:08:53.292Z	INFO	kafka/log.go:36	Connected to broker at <redacted>:9092 (registered as #2)
2018-10-02T18:08:53.390Z	INFO	kafka/log.go:36	Connected to broker at <redacted>:9092 (registered as #1)

The problem occurs when I upgrade to 6.4.1 in order to support reading the CRI containerd log format. The connection to Kafka is closed repeatedly and nothing is ever sent. See below:

2018-10-02T18:15:42.906Z	INFO	instance/beat.go:273	Setup Beat: filebeat; Version: 6.4.1
....snip....
2018-10-02T18:15:47.309Z	INFO	kafka/log.go:53	Connected to broker at <redacted>:9092 (unregistered)
2018-10-02T18:15:47.310Z	INFO	kafka/log.go:53	client/brokers registered new broker #2 at <redacted>:9092
2018-10-02T18:15:47.311Z	INFO	kafka/log.go:53	client/brokers registered new broker #1 at <redacted>:9092
2018-10-02T18:15:47.311Z	INFO	kafka/log.go:53	client/brokers registered new broker #3 at <redacted>:9092
2018-10-02T18:15:47.311Z	INFO	kafka/log.go:53	kafka message: Successfully initialized new client
2018-10-02T18:15:47.311Z	INFO	pipeline/output.go:105	Connection to kafka(<redacted>:9092,<redacted>:9092,<redacted>:9092) established
2018-10-02T18:15:47.311Z	INFO	kafka/log.go:53	producer/broker/1 starting up
2018-10-02T18:15:47.311Z	INFO	kafka/log.go:53	producer/broker/2 starting up
2018-10-02T18:15:47.311Z	INFO	kafka/log.go:53	producer/broker/2 state change to [open] on logstash/1
2018-10-02T18:15:47.311Z	INFO	kafka/log.go:53	producer/broker/1 state change to [open] on logstash/2
2018-10-02T18:15:47.312Z	INFO	kafka/log.go:53	producer/broker/1 state change to [open] on logstash/0
2018-10-02T18:15:47.712Z	INFO	kafka/log.go:53	Connected to broker at <redacted>:9092 (registered as #2)
2018-10-02T18:15:47.715Z	INFO	kafka/log.go:53	producer/broker/2 state change to [closing] because EOF
2018-10-02T18:15:47.715Z	INFO	kafka/log.go:53	Closed connection to broker <redacted>:9092
2018-10-02T18:15:47.715Z	INFO	kafka/log.go:53	producer/leader/logstash/1 state change to [retrying-1]
2018-10-02T18:15:47.715Z	INFO	kafka/log.go:53	producer/leader/logstash/1 abandoning broker 2
2018-10-02T18:15:47.716Z	INFO	kafka/log.go:53	producer/broker/2 shut down
2018-10-02T18:15:47.804Z	INFO	kafka/log.go:53	Connected to broker at <redacted>:9092 (registered as #1)
2018-10-02T18:15:48.004Z	INFO	kafka/log.go:53	client/metadata fetching metadata for [logstash] from broker <redacted>:9092
2018-10-02T18:15:48.006Z	INFO	kafka/log.go:53	producer/broker/2 starting up
2018-10-02T18:15:48.006Z	INFO	kafka/log.go:53	producer/broker/2 state change to [open] on logstash/1
2018-10-02T18:15:48.006Z	INFO	kafka/log.go:53	producer/leader/logstash/1 selected broker 2
2018-10-02T18:15:48.007Z	INFO	kafka/log.go:53	producer/leader/logstash/1 state change to [flushing-1]
2018-10-02T18:15:48.007Z	INFO	kafka/log.go:53	producer/leader/logstash/1 state change to [normal]
2018-10-02T18:15:48.008Z	INFO	kafka/log.go:53	producer/broker/1 state change to [closing] because EOF
2018-10-02T18:15:48.008Z	INFO	kafka/log.go:53	Closed connection to broker <redacted>:9092
2018-10-02T18:15:48.009Z	INFO	kafka/log.go:53	producer/leader/logstash/0 state change to [retrying-1]
2018-10-02T18:15:48.009Z	INFO	kafka/log.go:53	producer/leader/logstash/0 abandoning broker 1
2018-10-02T18:15:48.009Z	INFO	kafka/log.go:53	producer/leader/logstash/2 state change to [retrying-1]
2018-10-02T18:15:48.009Z	INFO	kafka/log.go:53	producer/leader/logstash/2 abandoning broker 1

Does anyone have any idea why the same config is not working with the newer version of Filebeat?

Let me know if you need any more information.

Hey Grant, what version of Kafka are you attempting to connect to? We are experiencing the same issue running Filebeat 6.4.0 and 6.4.2. By downgrading to Filebeat 6.3.2 we are able to connect.

We are attempting to connect to a Kafka 0.10 cluster.

We are running Kafka 0.10.0.0

https://www.elastic.co/guide/en/beats/filebeat/current/kafka-output.html#_compatibility_3

According to the compatibility documentation, it looks like our Kafka version is too old. I assume something changed in the Kafka client between 6.3.x and 6.4.x.

I will be upgrading Kafka to 0.11.0 in order to test if that resolves the connection issues and will report back with my findings.

I can confirm that upgrading Kafka to 0.11.0 fixes the connection problems with the filebeat 6.4.1 container, so if that is an option for you I would consider it.

With 6.4 the default expected Kafka version is 1.0.0. The protocol versions used for Kafka 0.11 and 1.0 should be similar. You can enforce an older protocol version by setting the Kafka output's version option to 0.10:

output.kafka:
  ...
  version: "0.10"

The minimal version tested with the current Kafka client is 0.11. I'd expect 0.10 to still work, but it is no longer officially supported or tested.
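For anyone applying this through the stable/filebeat Helm chart values shown earlier, the override sits alongside the other output.kafka settings. A sketch (broker hostnames are placeholders); note the quotes around the version, since unquoted YAML would parse 0.10 as the float 0.1:

```yaml
# Hypothetical values fragment for the stable/filebeat chart.
config:
  output.kafka:
    hosts: ["kafka-1:9092", "kafka-2:9092", "kafka-3:9092"]
    topic: "logstash"
    required_acks: 1
    compression: gzip
    max_message_bytes: 1000000
    # Quote the value: unquoted, YAML reads 0.10 as 0.1.
    version: "0.10"
```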

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.