Hello,
I am using Filebeat 7.9.0.
I have a Kubernetes cluster with multiple node pools, some using the containerd runtime and some using the Docker runtime.
I want the logs from both runtimes to be correctly parsed and enriched with Kubernetes metadata.
I am using this config file:
```yaml
filebeatConfig:
  filebeat.yml: |
    filebeat:
      modules:
        - module: nginx
          access:
            enabled: true
          error:
            enabled: true
      autodiscover:
        providers:
          - type: kubernetes
            labels.dedot: true
            annotations.dedot: true
            templates:
              - condition:
                  equals:
                    kubernetes.labels.type: java
                config:
                  - type: container
                    paths:
                      - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
                      - /var/log/pods/${data.kubernetes.namespace}_${data.kubernetes.pod.name}_${data.kubernetes.pod.uid}/${data.kubernetes.container.name}/*.log
                    multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} '
                    multiline.negate: true
                    multiline.match: after
                    ignore_older: 48h
              - condition:
                  contains:
                    kubernetes.labels.app: nginx
                config:
                  - module: nginx
                    access:
                      input:
                        type: container
                        paths:
                          - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
                          - /var/log/pods/${data.kubernetes.namespace}_${data.kubernetes.pod.name}_${data.kubernetes.pod.uid}/${data.kubernetes.container.name}/*.log
                        stream: stdout
                        ignore_older: 48h
                    error:
                      input:
                        type: container
                        paths:
                          - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
                          - /var/log/pods/${data.kubernetes.namespace}_${data.kubernetes.pod.name}_${data.kubernetes.pod.uid}/${data.kubernetes.container.name}/*.log
                        stream: stderr
                        ignore_older: 48h
              - condition:
                  and:
                    - has_fields: ['kubernetes.container.id']
                    - not.contains:
                        kubernetes.labels.app: nginx
                    - not.equals:
                        kubernetes.labels.type: java
                config:
                  - type: container
                    paths:
                      - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
                      - /var/log/pods/${data.kubernetes.namespace}_${data.kubernetes.pod.name}_${data.kubernetes.pod.uid}/${data.kubernetes.container.name}/*.log
                    ignore_older: 48h
    setup:
      ilm:
        enabled: false
      template:
        name: "filebeat-%{[agent.version]}"
        pattern: "*-filebeat-%{[agent.version]}-*"
        settings:
          index:
            number_of_replicas: 0
    processors:
      - add_kubernetes_metadata:
          labels.dedot: true
          annotations.dedot: true
      - drop_event:
          when:
            equals:
              kubernetes.container.name: filebeat
    output.elasticsearch:
      host: '${NODE_NAME}'
      hosts: '${ELASTICSEARCH_HOSTS:es-master:9200}'
      index: "%{[kubernetes.namespace]:nonamespace}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
      pipelines:
        - pipeline: java-logs-pipeline
          when.equals:
            kubernetes.labels.type: java
        - pipeline: mongodb-logs-pipeline
          when.equals:
            kubernetes.labels.app: mongo-pod
```
In my container inputs I add two paths, /var/lib/docker/containers and /var/log/pods, but Filebeat logs a huge number of errors (millions per hour):

```
2021-06-18T10:50:35.669Z ERROR [kubernetes] add_kubernetes_metadata/matchers.go:91 Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.
```
How can I handle this use case? I tried modifying the processor configuration as follows, but it does not work either:

```yaml
processors:
  - add_kubernetes_metadata:
      labels.dedot: true
      annotations.dedot: true
      matchers:
        - logs_path:
            logs_path: "/var/lib/docker/containers/"
        - logs_path:
            logs_path: "/var/log/pods/"
```
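From reading the add_kubernetes_metadata documentation, I wonder whether the /var/log/pods matcher also needs `resource_type: 'pod'` together with a `pod_uid` indexer, since that path contains the pod UID rather than a container id. Something like the sketch below (untested on my side; mixing both matchers in one processor is my assumption):

```yaml
processors:
  - add_kubernetes_metadata:
      labels.dedot: true
      annotations.dedot: true
      # assumption: disable the defaults so only the indexers/matchers below are used
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        # container indexer: look up metadata by container id (Docker paths)
        - container:
        # pod_uid indexer: look up metadata by pod UID (/var/log/pods paths)
        - pod_uid:
      matchers:
        # Docker runtime: the container id is part of the log path
        - logs_path:
            logs_path: "/var/lib/docker/containers/"
        # containerd runtime: the path is /var/log/pods/<namespace>_<pod>_<uid>/<container>/*.log,
        # so match on the pod UID instead of a container id
        - logs_path:
            logs_path: "/var/log/pods/"
            resource_type: "pod"
```

Would that be the right direction?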
Thanks a lot