Hi all.
We use Filebeat to collect events from pods in a Kubernetes cluster. To do this we run it as a DaemonSet and mount /var/lib/docker/containers, /var/log/pods, and /var/log/containers from the cluster nodes into the Filebeat pods.
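For reference, the relevant part of our DaemonSet manifest looks roughly like this (volume names are illustrative, the host paths are the ones listed above):

```yaml
volumeMounts:
  - name: docker-containers
    mountPath: /var/lib/docker/containers
    readOnly: true
  - name: var-log-pods
    mountPath: /var/log/pods
    readOnly: true
  - name: var-log-containers
    mountPath: /var/log/containers
    readOnly: true
volumes:
  - name: docker-containers
    hostPath:
      path: /var/lib/docker/containers
  - name: var-log-pods
    hostPath:
      path: /var/log/pods
  - name: var-log-containers
    hostPath:
      path: /var/log/containers
```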
The configuration file for filebeat:
```yaml
- type: container
  paths:
    - /var/log/containers/*.log
  close_inactive: 10m
  close_removed: true
  ignore_older: 12m
  clean_inactive: 15m
  clean_removed: true
  multiline.pattern: '^(\{s{0,1}\")'
  multiline.negate: true
  multiline.match: after
  multiline.timeout: 10s
  harvester_buffer_size: 65536
  scan_frequency: 1s
  partial: true
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
    - decode_json_fields:
        fields: ['message']
        target: ''
        max_depth: 3
        add_error_key: true
```
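The multiline settings are meant to treat every line that starts a JSON object as the start of a new event: with negate: true and match: after, consecutive lines that do not match the pattern are appended to the previous matching line. For example (illustrative lines):

```
{"level":"error","message":"request failed"}    <- matches the pattern, starts a new event
java.lang.RuntimeException: boom                <- no match, appended to the event above
    at com.example.Handler.run(Handler.java:42) <- no match, appended as well
```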
Docker daemon settings (daemon.json):
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-file": "5",
    "max-size": "10m"
  },
  "max-concurrent-downloads": 3
}
```
We have the following issues:
Application pods produce events that are either a single long line (up to 50 KB) or multiline (up to 100 KB in total). Docker cuts long lines at 16 KB, and about 10% of the time Filebeat does not merge the pieces back into a single event (see the illustration below).
Multiline events are also not merged when parts of an entry end up in different files (podname.log and podname.log.1) due to rotation.
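To illustrate the first issue: the json-file driver writes one JSON entry per chunk, and as far as we can tell only the final chunk of a split line has a log field ending in \n. For example (content and timestamps shortened):

```json
{"log":"first 16 KB of the long line ...","stream":"stdout","time":"2021-03-01T10:00:00.000000001Z"}
{"log":"... the rest of the line\n","stream":"stdout","time":"2021-03-01T10:00:00.000000002Z"}
```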
Is this a bug, or a misconfiguration on our side?
filebeat - docker.elastic.co/beats/filebeat-oss:7.9.3
docker - v19.03.13
kubernetes - v1.19.8