Here's a (not very) pretty picture: the filebeat pods in our K8s cluster start up, then over a fairly short period steadily increase their memory consumption until they're killed out-of-memory, and then restart.
This is version 6.6.2, with a memory limit of 500M.
I am about to try fiddling with queue.mem.events ... but is there any general guidance about memory usage tuning for filebeat?
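For reference, the knob I'm planning to fiddle with lives under `queue.mem` in `filebeat.yml`. Something like the following (the values here are just a starting guess for a memory-constrained pod, not a recommendation):

```yaml
# filebeat.yml -- in-memory queue tuning (guessed values, not a recommendation)
queue.mem:
  events: 2048            # max events buffered in memory (default is 4096 in 6.x)
  flush.min_events: 512   # publish once this many events are queued...
  flush.timeout: 5s       # ...or after this long, whichever comes first
```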
The removal of add_kubernetes_metadata made an enormous difference. The move to 6.8.1 may also have helped, but I did both at the same time so I can't be sure. It's been running fine overnight, for the first time since we started trying to run filebeat on Kubernetes.
Now, where did the add_kubernetes_metadata that caused the trouble come from? I'm not absolutely certain, but I think it was in the filebeat-k8s.yaml file I downloaded from somewhere vaguely semi-official-looking as "this is how you do filebeat on Kubernetes".
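For anyone hitting the same thing: the culprit was a `processors` block along these lines (paraphrased from memory; the exact options in the manifest I downloaded may have differed). Deleting it is what stopped the memory growth:

```yaml
# filebeat.yml -- the processor we removed. It enriches every event with
# pod/namespace/label metadata by watching the Kubernetes API, and in our
# case its memory use appeared to grow without bound.
processors:
  - add_kubernetes_metadata:
      in_cluster: true
```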
Here is the "after" memory usage on the same scale: