Hi,
Running Filebeat 8.9.0.
I'm using the add_kubernetes_metadata processor to split the logs into two separate Kafka topics. All of my organization's deployments carry a label that marks them as ours, so those logs go into one Kafka topic and every other pod's logs go to the other topic.
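For reference, this is roughly how the label is set on the pod template in my deployments (simplified, with placeholder names):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        log-group: my-org      # the pod label the Filebeat condition matches on
    spec:
      containers:
        - name: my-app
          image: my-app:latest # placeholder image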
The issue is that sometimes the label isn't detected. It seems random, and even more weirdly it can happen to some scaled replicas of a deployment but not others. So some of my app/organization logs end up in the "other" topic, while most land in the proper topic.

So is this some sort of performance issue? Here's my config:
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
  processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
          - logs_path:
              logs_path: "/var/log/containers/"

processors:
  - if:
      equals:
        kubernetes.labels.log-group: my-org
    then:
      add_fields:
        fields:
          log-topic: my-org
    else:
      add_fields:
        fields:
          log-topic: kube-other

output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["...", "...", "..."]
  # message topic selection + partitioning
  topic: '%{[fields.log-topic]}'
  partition.round_robin:
    reachable_only: false
  required_acks: -1
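To narrow it down, I'm thinking of tagging events that carry no kubernetes metadata at all, so I can tell whether the enrichment failed entirely or only the label is missing (a sketch; the tag name is made up):

processors:
  - if:
      not:
        has_fields: ['kubernetes']
    then:
      add_tags:
        tags: [missing-k8s-meta]   # placeholder tag name

If the mislabeled events also lack the kubernetes field, that would point at the metadata enrichment itself rather than at my label.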
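For completeness: as far as I can tell, the same routing could be expressed directly in the Kafka output with conditional topics instead of the intermediate fields.log-topic, though I assume it would hit the same problem whenever the label is missing (a sketch):

output.kafka:
  hosts: ["...", "...", "..."]
  topic: 'kube-other'              # default for events that don't match below
  topics:
    - topic: 'my-org'
      when.equals:
        kubernetes.labels.log-group: my-org
  partition.round_robin:
    reachable_only: false
  required_acks: -1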