Filebeat memory issues

Here's a (not very) pretty picture: the filebeats in our K8s cluster start up, steadily increase their memory consumption over a fairly short period until they are killed with out-of-memory, and then restart.

[image: memory usage graph]

This is filebeat version 6.6.2, with a container memory limit of 500M.

I am about to try fiddling with queue.mem.events ... but is there any general guidance about memory usage tuning for filebeat?
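For context, this is the sort of change I mean in filebeat.yml (a sketch; the only value I'm actually changing is queue.mem.events, which defaults to 4096):

```yaml
# Cap the number of events buffered in the in-memory queue.
# Default is 4096; trying something smaller to bound memory use.
queue.mem:
  events: 1024
```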

Well, setting queue.mem.events to 1024 doesn't seem to have been helpful:

(1) Filebeat seems to be struggling to work through the backlog of data and keep up with ongoing log generation.

(2) Filebeat pods are still running out of memory and crashing and restarting.

I've now found https://github.com/elastic/beats/issues/9302#issuecomment-490000600 which didn't exist last time I went through this loop (or, at least, the fix in 6.8.1 didn't exist). Trying it to see what happens ...

Hi @TimWard,

Yes, please, try with 6.8.1 if possible, we backported some fixes for memory leaks related to autodiscover to this version.

The removal of add_kubernetes_metadata made an enormous difference. The move to 6.8.1 may also have helped, but I did both changes together, so I can't say for certain which mattered. It's been running fine overnight, for the first time since we started trying to run filebeat on Kubernetes.
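For the record, the version move itself was just an image-tag bump in our DaemonSet (names here are illustrative of our setup, not anything official):

```yaml
# filebeat DaemonSet container spec (illustrative)
containers:
  - name: filebeat
    image: docker.elastic.co/beats/filebeat:6.8.1  # was 6.6.2
    resources:
      limits:
        memory: 500M  # same limit as before
```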

Now, where did the add_kubernetes_metadata that caused the trouble come from? I'm not absolutely certain, but I think it was in the filebeat-k8s.yaml file I downloaded from somewhere vaguely semi-official-looking as "this is how you do filebeat on Kubernetes".
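If it helps anyone else, the offending part of that manifest looked something like this (reconstructed from memory, so treat it as illustrative rather than a quote of the actual file):

```yaml
# Input config from the downloaded filebeat-k8s.yaml (approximate)
filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"
    processors:
      - add_kubernetes_metadata:  # this is the processor we removed
          in_cluster: true
```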

Here is the "after" memory usage on the same scale:

[image: memory usage graph after the changes]
