When running Filebeat on a Kubernetes node, the system spends a large share of CPU cycles in iowait. According to iotop, the system is writing about 10 MB/s during this time, while reads reach upwards of 150 MB/s. The Filebeat container's memory usage grows steadily along with the I/O load until it hits the memory limit we set on the pod, and the pod is restarted. Nothing in the Filebeat container's logs indicates that the pod is entering an error state or failing to send logs to Kafka. Is there a setting we are overlooking that could cause this? Below are the two input types used by the Filebeat container. The node uses the Docker json-file logging driver with the default file size and count.
- type: docker
  combine_partial: true
  cri.parse_flags: true
  close_inactive: 48h
  containers.ids:
    - "*"
  exclude_lines:
    - 1
    .
    .
    - 8
- type: log
  paths:
    - "path 1"
    - "path 2"
    - "path 3"
  exclude_lines:
    - 1
    .
    .
    - 5
  fields:
    log_topic: 'log_topic_name'
  fields_under_root: true
  scan_frequency: 1s
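For comparison, here is a sketch of the same docker input with shorter harvester lifetimes, using options from the Filebeat input reference. The values are illustrative assumptions, not a configuration we have tested; in particular, 5m is the documented default for close_inactive as we understand it, versus the 48h used above.

```yaml
# Sketch only: illustrative values, not our running config.
- type: docker
  combine_partial: true
  cri.parse_flags: true
  containers.ids:
    - "*"
  close_inactive: 5m    # default; 48h keeps file handles open far longer
  close_removed: true   # release the handle when a rotated json.log is deleted
  clean_removed: true   # drop registry state for files that no longer exist
  harvester_limit: 0    # 0 = unlimited; a positive value caps concurrent open files
```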