Filebeat memory usage

I've seen a number of threads on this but none that seem to give any diagnosis or recommendations.

We are running Filebeat (various 6.x versions) in Kubernetes, and it seems that whatever memory limit we set the pods eventually get OOM killed and start again. This isn't a significant problem from the point of view of log ingestion - some log data arrives a minute or three late - but it is a problem for our Ops people who don't like to see "pod killed because out of memory" alerts all the time.
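
For context, the limit in question is just the ordinary container memory limit on our Filebeat DaemonSet, along the lines of the sketch below (the names and numbers are illustrative rather than our actual manifest); it is the limits.memory value that the pods keep hitting:

    # Illustrative excerpt only, not the real manifest: the container is
    # OOM-killed whenever it exceeds resources.limits.memory.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: filebeat
      namespace: logging
    spec:
      selector:
        matchLabels:
          app: filebeat
      template:
        metadata:
          labels:
            app: filebeat
        spec:
          containers:
            - name: filebeat
              image: docker.elastic.co/beats/filebeat:6.2.4
              resources:
                requests:
                  memory: 256Mi   # what the scheduler reserves for the pod
                limits:
                  memory: 1Gi     # exceeding this triggers the OOM kill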

Is there any way of:

  • finding out what is going on (see the sketch after this list),
  • finding out how much memory Filebeat should be using, so we can get the allocation right in K8s, and
  • configuring Filebeat so as not to use an ever-increasing amount of memory?
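
For what it's worth, the closest thing I can see to a way of answering the first bullet is the -httpprof flag, which, if I've understood the Beats command reference correctly, exposes Go's pprof handlers from the running process. A sketch of how that might be added to the container spec from the earlier example is below; the args, port and commands are assumptions rather than anything we have verified:

    # Sketch only: run Filebeat with -httpprof so Go's pprof endpoints are
    # reachable inside the pod (flag usage and port are assumptions).
    containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.2.4
        args:
          - "-c"
          - "/etc/filebeat.yml"
          - "-e"
          - "-httpprof"
          - "0.0.0.0:6060"
    # A heap profile could then be pulled from outside the pod, for example:
    #   kubectl port-forward <filebeat-pod> 6060:6060
    #   go tool pprof http://localhost:6060/debug/pprof/heap

That would at least give a heap profile to compare against the limit, rather than guessing at an allocation.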

Hello @TimWard

  1. I think we should concentrate on finding out what is going on. What version of Filebeat are you running? We appear to have a goroutine/memory leak in 6.5.

  2. By default, Filebeat will try to create as many harvesters as possible, one for each discovered file. One way to reduce memory usage is to cap how many harvesters are created, using the options described in Log input | Filebeat Reference [6.5] | Elastic (see the config sketch below).
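
For illustration, a capped configuration might look something like the sketch below. harvester_limit and close_inactive are per-input options of the log input, and queue.mem is another setting that affects the in-memory footprint; the values and the path are only placeholders and would need tuning for the actual workload (and on 6.2 the top-level key is filebeat.prospectors rather than filebeat.inputs):

    # Sketch of a filebeat.yml that bounds memory use (values are illustrative).
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/containers/*.log   # example path, adjust to the real layout
        harvester_limit: 50     # cap the number of harvesters started in parallel
        close_inactive: 2m      # release file handles (and harvesters) sooner

    # Bound the internal event queue rather than letting it grow under backpressure.
    queue.mem:
      events: 2048
      flush.min_events: 512
      flush.timeout: 5s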

6.2.4

I don't have any evidence that it's definitely a memory leak. It may well be "simply" a question of better understanding the memory needs and configuration options, but at present I don't know where to start.

I've tried increasing the memory from whatever I started with (don't remember) to 1G, but even at 1G it regularly runs out and dies, and I don't think the people who pay our cloud bill would be too happy if I just kept on and on increasing it.
