Filebeat with high memory consumption after logrotate

Hello!

We use Filebeat (in Docker, version 7.8) to send logs to Logstash. Every night at 0:01, logrotate runs on the machine where Filebeat is running, rotating ~20-25 logfiles. Logrotate is configured with the following options:

daily
missingok
rotate 7
compress
copytruncate
dateext
dateformat .%Y-%m-%d-%s.rotlog
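
For context, here is a minimal sketch of what the copytruncate option does (file names are made up for illustration). Because the live file is truncated in place rather than renamed, its inode never changes, so Filebeat sees the same file shrink to zero and re-reads it from offset 0:

```shell
# Rough equivalent of logrotate's copytruncate (illustrative only):
# 1. copy the live log aside, 2. truncate the original in place.
echo "some log line" > app.log
cp app.log "app.log.$(date +%Y-%m-%d-%s).rotlog"
truncate -s 0 app.log
```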

From the moment logrotate starts, Filebeat starts allocating a lot of memory.


Filebeat reports that the file got truncated, which is accurate.

We've already tried out:

  • set a memory limit on the container, which only puts the container into a restart loop
  • configured queue.spool instead of queue.mem
  • set close_inactive to 30min (logs are written less at night)
  • configured client_inactivity_timeout in Logstash (logs are written less at night)
  • split the log rotation into time frames (cron starts different jobs between 0:01 and 0:10)
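
For reference, the queue.mem and close_inactive attempts above can be sketched as a minimal filebeat.yml fragment. Paths and values are illustrative, not a recommendation; capping queue.mem.events bounds how many events Filebeat buffers in memory at once:

```yaml
# Hypothetical filebeat.yml fragment; paths and values are examples only.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log
    close_inactive: 30m      # close handles on files idle for 30 minutes

queue.mem:
  events: 2048               # max events buffered in memory (default 4096)
  flush.min_events: 512
  flush.timeout: 5s

output.logstash:
  hosts: ["logstash:5044"]
```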

None of these attempts helped to throttle Filebeat's memory consumption. Right after the first rotation, consumption spikes.

So what's the problem here?

Cheers
Marco

This is a screenshot from last night. What is happening there? Why is Filebeat consuming > 5GB of RAM?

Welcome to our community! :smiley:
Which part of the memory is that actually measuring?

Hey Mark,
this is the container memory usage.

I will take an educated guess and suggest that this is virtual memory being used by Filebeat juggling the files that it wants to read and has read.

Hopefully someone will pop in and ask a few more specific questions, as I can't really comment on how to reduce it.

I'm just a bit surprised that Filebeat needs up to 6GB of memory after the truncate. It is only a spike, but it would be good to get a grip on this behaviour. I would like to understand what happens internally, and also why Filebeat uses < 500MB during the day but an unbounded amount after the rotation.

I'm currently a bit at a loss as to how we can get a handle on this. This setup is not yet in production, only in a test environment. In production, logs are written around the clock.

Anyone? No ideas?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.