I am working on getting logs out of Kubernetes using the "stable/filebeat" Helm chart. Everything is running on Ubuntu 16.04 servers.
There seems to be a change between the 6.3.2 and 6.4.1 versions of the filebeat-oss Docker image that causes an issue when communicating with Kafka.
Here is the Filebeat config I am using for the container:
config:
  filebeat.config:
    prospectors:
      # Mounted `filebeat-prospectors` configmap:
      path: ${path.config}/prospectors.d/*.yml
      # Reload prospectors configs as they change:
      reload.enabled: false
    modules:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
  processors:
    - add_cloud_metadata:
  filebeat.prospectors:
    - type: log
      enabled: true
      paths:
        - /var/log/containers/*.log
        - /var/log/syslog
    - type: docker
      containers.ids:
        - "*"
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
    - drop_event:
        when:
          equals:
            kubernetes.container.name: "filebeat"
  output.file:
    enabled: false
  output.kafka:
    hosts: ["<redacted>:9092", "<redacted>:9092", "<redacted>:9092"]
    topic: "logstash"
    required_acks: 1
    compression: gzip
    max_message_bytes: 1000000
  # When a key contains a period, use this format for setting values on the command line:
  # --set config."http\.enabled"=true
  http.enabled: false
  http.port: 5066
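For reference, the dotted-key escaping mentioned in the comment above would be used roughly like this when installing the chart (the release name and the specific values shown are illustrative, not copied from my actual deployment):

```shell
# Hypothetical install command for the stable/filebeat chart.
# Keys containing a literal period are escaped with a backslash
# so Helm does not treat the period as a nesting separator.
helm upgrade --install filebeat stable/filebeat \
  --set image.tag=6.4.1 \
  --set 'config.http\.enabled=false' \
  --set 'config.http\.port=5066'
```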
The connection succeeds with filebeat-oss version 6.3.2, and the logs are as follows:
2018-10-02T18:08:48.792Z INFO instance/beat.go:225 Setup Beat: filebeat; Version: 6.3.2
....snip....
2018-10-02T18:08:53.095Z INFO kafka/log.go:36 Connected to broker at <redacted>:9092 (unregistered)
2018-10-02T18:08:53.096Z INFO kafka/log.go:36 client/brokers registered new broker #2 at <redacted>:9092
2018-10-02T18:08:53.096Z INFO kafka/log.go:36 client/brokers registered new broker #1 at <redacted>:9092
2018-10-02T18:08:53.096Z INFO kafka/log.go:36 client/brokers registered new broker #3 at <redacted>:9092
2018-10-02T18:08:53.189Z INFO kafka/log.go:36 kafka message: Successfully initialized new client
2018-10-02T18:08:53.190Z INFO kafka/log.go:36 producer/broker/1 starting up
2018-10-02T18:08:53.190Z INFO kafka/log.go:36 producer/broker/1 state change to [open] on logstash/0
2018-10-02T18:08:53.190Z INFO kafka/log.go:36 producer/broker/1 state change to [open] on logstash/2
2018-10-02T18:08:53.190Z INFO kafka/log.go:36 producer/broker/2 starting up
2018-10-02T18:08:53.190Z INFO kafka/log.go:36 producer/broker/2 state change to [open] on logstash/1
2018-10-02T18:08:53.292Z INFO kafka/log.go:36 Connected to broker at <redacted>:9092 (registered as #2)
2018-10-02T18:08:53.390Z INFO kafka/log.go:36 Connected to broker at <redacted>:9092 (registered as #1)
The problem occurs when I upgrade to 6.4.1 in order to support reading the CRI containerd log format. The connection to Kafka is closed repeatedly and nothing is ever sent. See below:
2018-10-02T18:15:42.906Z INFO instance/beat.go:273 Setup Beat: filebeat; Version: 6.4.1
....snip....
2018-10-02T18:15:47.309Z INFO kafka/log.go:53 Connected to broker at <redacted>:9092 (unregistered)
2018-10-02T18:15:47.310Z INFO kafka/log.go:53 client/brokers registered new broker #2 at <redacted>:9092
2018-10-02T18:15:47.311Z INFO kafka/log.go:53 client/brokers registered new broker #1 at <redacted>:9092
2018-10-02T18:15:47.311Z INFO kafka/log.go:53 client/brokers registered new broker #3 at <redacted>:9092
2018-10-02T18:15:47.311Z INFO kafka/log.go:53 kafka message: Successfully initialized new client
2018-10-02T18:15:47.311Z INFO pipeline/output.go:105 Connection to kafka(<redacted>:9092,<redacted>:9092,<redacted>:9092) established
2018-10-02T18:15:47.311Z INFO kafka/log.go:53 producer/broker/1 starting up
2018-10-02T18:15:47.311Z INFO kafka/log.go:53 producer/broker/2 starting up
2018-10-02T18:15:47.311Z INFO kafka/log.go:53 producer/broker/2 state change to [open] on logstash/1
2018-10-02T18:15:47.311Z INFO kafka/log.go:53 producer/broker/1 state change to [open] on logstash/2
2018-10-02T18:15:47.312Z INFO kafka/log.go:53 producer/broker/1 state change to [open] on logstash/0
2018-10-02T18:15:47.712Z INFO kafka/log.go:53 Connected to broker at <redacted>:9092 (registered as #2)
2018-10-02T18:15:47.715Z INFO kafka/log.go:53 producer/broker/2 state change to [closing] because EOF
2018-10-02T18:15:47.715Z INFO kafka/log.go:53 Closed connection to broker <redacted>:9092
2018-10-02T18:15:47.715Z INFO kafka/log.go:53 producer/leader/logstash/1 state change to [retrying-1]
2018-10-02T18:15:47.715Z INFO kafka/log.go:53 producer/leader/logstash/1 abandoning broker 2
2018-10-02T18:15:47.716Z INFO kafka/log.go:53 producer/broker/2 shut down
2018-10-02T18:15:47.804Z INFO kafka/log.go:53 Connected to broker at <redacted>:9092 (registered as #1)
2018-10-02T18:15:48.004Z INFO kafka/log.go:53 client/metadata fetching metadata for [logstash] from broker <redacted>:9092
2018-10-02T18:15:48.006Z INFO kafka/log.go:53 producer/broker/2 starting up
2018-10-02T18:15:48.006Z INFO kafka/log.go:53 producer/broker/2 state change to [open] on logstash/1
2018-10-02T18:15:48.006Z INFO kafka/log.go:53 producer/leader/logstash/1 selected broker 2
2018-10-02T18:15:48.007Z INFO kafka/log.go:53 producer/leader/logstash/1 state change to [flushing-1]
2018-10-02T18:15:48.007Z INFO kafka/log.go:53 producer/leader/logstash/1 state change to [normal]
2018-10-02T18:15:48.008Z INFO kafka/log.go:53 producer/broker/1 state change to [closing] because EOF
2018-10-02T18:15:48.008Z INFO kafka/log.go:53 Closed connection to broker <redacted>:9092
2018-10-02T18:15:48.009Z INFO kafka/log.go:53 producer/leader/logstash/0 state change to [retrying-1]
2018-10-02T18:15:48.009Z INFO kafka/log.go:53 producer/leader/logstash/0 abandoning broker 1
2018-10-02T18:15:48.009Z INFO kafka/log.go:53 producer/leader/logstash/2 state change to [retrying-1]
2018-10-02T18:15:48.009Z INFO kafka/log.go:53 producer/leader/logstash/2 abandoning broker 1
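To make the failure pattern easier to see, the close/retry cycle in the 6.4.1 log can be summarized with a short script (just a debugging sketch; the excerpt below is pasted from the log above, truncated for brevity):

```python
import re
from collections import Counter

# Excerpt of the 6.4.1 Filebeat log from above (truncated).
log = """\
2018-10-02T18:15:47.715Z INFO kafka/log.go:53 producer/broker/2 state change to [closing] because EOF
2018-10-02T18:15:47.715Z INFO kafka/log.go:53 producer/leader/logstash/1 state change to [retrying-1]
2018-10-02T18:15:48.007Z INFO kafka/log.go:53 producer/leader/logstash/1 state change to [flushing-1]
2018-10-02T18:15:48.008Z INFO kafka/log.go:53 producer/broker/1 state change to [closing] because EOF
2018-10-02T18:15:48.009Z INFO kafka/log.go:53 producer/leader/logstash/0 state change to [retrying-1]
2018-10-02T18:15:48.009Z INFO kafka/log.go:53 producer/leader/logstash/2 state change to [retrying-1]
"""

# Count how often each producer state appears; a healthy run should show
# partitions settling in [normal], not cycling through [closing]/[retrying-*].
states = Counter(re.findall(r"state change to \[([\w-]+)\]", log))
print(states)
```

Running this against the full 6.4.1 log shows every partition leader repeatedly entering [closing] (always "because EOF") and [retrying-*], which is why nothing is ever delivered.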
Does anyone have any idea why the same config does not work with the newer version of Filebeat?
Let me know if you need any more information.