- Filebeat version: 6.1.2
- Kubernetes nodes are running on Google's GKE
- Kubernetes version: 1.8.5-gke.0
Context
- We are following the instructions in https://www.elastic.co/guide/en/beats/filebeat/6.1/running-on-kubernetes.html
- We are using almost the same Kubernetes manifest file as https://raw.githubusercontent.com/elastic/beats/6.1/deploy/kubernetes/filebeat-kubernetes.yaml

The only difference is the output: instead of Elasticsearch, we are using Kafka, creating a new topic for each `app` label in Kubernetes. The diff is:
```diff
-    processors:
-      - add_cloud_metadata:
-
-    cloud.id: ${ELASTIC_CLOUD_ID}
-    cloud.auth: ${ELASTIC_CLOUD_AUTH}
-
-    output.elasticsearch:
-      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
-      username: ${ELASTICSEARCH_USERNAME}
-      password: ${ELASTICSEARCH_PASSWORD}
+    output.kafka:
+      hosts: ["brokers.kafka.svc.cluster.local:9092"]
+      topic: '%{[kubernetes.labels.app]}'
```
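For reference, the resulting `output.kafka` section looks roughly like this. The `topics` fallback rule is illustrative (the `unlabelled-logs` topic name is hypothetical), showing one way to route events from pods that carry no `app` label, since `%{[kubernetes.labels.app]}` cannot resolve for those:

```yaml
output.kafka:
  hosts: ["brokers.kafka.svc.cluster.local:9092"]
  # Default topic: resolved per event from the pod's `app` label
  topic: '%{[kubernetes.labels.app]}'
  # Hypothetical fallback for events missing that label
  topics:
    - topic: 'unlabelled-logs'
      when.not.has_fields: ['kubernetes.labels.app']
```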
The Problem
New pods' logs are not being picked up by Filebeat. They are only picked up if I delete the Filebeat pods; once the DaemonSet recreates them, the new logs are picked up.
As an example, we create a sample Deployment in Kubernetes with 10 replica pods writing messages to stdout:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: log-test
  labels:
    app: my-custom-log
spec:
  replicas: 10
  # ... pod template whose container writes messages to stdout
```
We should be seeing a new topic in Kafka named `my-custom-log` (the value of the pods' `app` label); however, the logs for those new pods are not picked up, so no topic is created:
```shell
$ kubectl exec kclient -n kafka -- /usr/bin/kafka-topics --zookeeper zookeeper:2181 --list | grep my-custom-log
$
```
However, once I shut down one of the Filebeat pods:
```shell
$ kubectl delete pod -n kube-system filebeat-sf9kz
pod "filebeat-sf9kz" deleted
```
It gets recreated:
```
filebeat-sf9kz   1/1   Terminating         0   4m
filebeat-sf9kz   0/1   Terminating         0   4m
filebeat-twlgc   0/1   Pending             0   0s
filebeat-twlgc   0/1   ContainerCreating   0   0s
filebeat-twlgc   1/1   Running             0   1s
```
And now the topic is there as expected:
```shell
$ kubectl exec kclient -n kafka -- /usr/bin/kafka-topics --zookeeper zookeeper:2181 --list | grep my-custom-log
my-custom-log
```
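As a stopgap, we can bounce all the Filebeat pods at once by deleting them by label and letting the DaemonSet recreate them (assuming the `k8s-app: filebeat` label from the stock manifest):

```shell
$ kubectl delete pod -n kube-system -l k8s-app=filebeat
```

This is obviously not a real fix, since logs from pods created after the restart are lost again.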
Is there anything we can do to fix this situation?
Thanks!