Filebeat still processes old log file

Hello guys,
I have a problem and only found the root cause yesterday.
Filebeat and logrotate are both running on the same instance.
When logrotate runs, Filebeat keeps reading the old file, even though the file is gone.
This causes disk space to be consumed invisibly. How do I know?
I ran lsof +L1 and saw that Filebeat was still processing the old (deleted) log file. After that, I had to kill the PID, or run service filebeat stop and then service filebeat start. Once I did that, disk usage went back to normal.
Have you ever experienced this problem?
Do you have any idea how to solve this without manual intervention?
Thank you for your time and help.

Hello @merceskoba. To make sure all events are consumed, Filebeat will by design keep the file open until it reaches EOF and the close_inactive timeout is triggered, which by default is 5 minutes after Filebeat has completely read the file. All of these events are written to the log.

Are you sure that Filebeat did consume all the events from the file?

Can you share your harvester/input configuration?

We can get a bit more log information if you start Filebeat with the "harvester" debug selector, either by configuring it in the YAML or by starting Filebeat with -d "harvester".
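For example, a minimal logging section in filebeat.yml that enables just the harvester selector could look like this (a sketch for 6.x; the log path is a placeholder):

```yaml
# filebeat.yml (sketch): debug logging limited to the harvester selector
logging.level: debug
logging.selectors: ["harvester"]
logging.to_files: true
logging.files:
  path: /var/log/filebeat   # placeholder directory
  name: filebeat
```

The -d "harvester" flag mentioned above does the same thing for a one-off run.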

https://paste.ubuntu.com/p/2nw5m3TYGc/

That's my config. And I just noticed these links:
https://www.elastic.co/guide/en/beats/filebeat/6.2/configuration-filebeat-options.html#close-inactive
https://www.elastic.co/guide/en/beats/filebeat/6.2/configuration-filebeat-options.html#close-renamed

Should I add close_inactive or close_renamed?
If abc.log is rotated by logrotate while Filebeat is reading abc.log, then abc.log will be renamed to abc.log.1, right?
Is Filebeat still reading abc.log.1?
I think 'yes', and that's why lsof +L1 showed Filebeat still processing the previous log. But it looked stuck, as if Filebeat had never finished reading it.

My idea is to add close_renamed: if abc.log is renamed to abc.log.1, Filebeat will stop reading it, but there is potential data loss because Filebeat may not have read the log up to EOF.

close_inactive is already set to 5 minutes by default, so I would not change it; it should trigger closing the file when no new lines are read for 5 minutes. I would not introduce close_renamed because of the risk of losing events.
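For reference, this is roughly where those options live in a 6.2-style prospector configuration; it is only a sketch, and the path and values are placeholders rather than anything from the pasted config:

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/app/abc.log   # placeholder path
    close_inactive: 5m         # default: close the harvester 5 minutes after the last read
    # close_renamed: true      # would close the file when it is renamed (abc.log -> abc.log.1),
    #                          # at the risk of dropping events that were not yet read
```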

If abc.log is rotated by logrotate while Filebeat is reading abc.log, then abc.log will be renamed to abc.log.1, right?
Is Filebeat still reading abc.log.1?
I think 'yes', and that's why lsof +L1 showed Filebeat still processing the previous log. But it looked stuck, as if Filebeat had never finished reading it.

Yes, this is exactly what is happening, and it's by design, so you don't lose events.

Usually when Filebeat keeps a file descriptor open, it's because it hasn't read the file completely and sent the events to Logstash; this means you are producing more events than your Logstash instance (or anything downstream of it) can ingest.

Before using close_renamed and losing events, I would encourage you to look at your Logstash logs to see if there are any errors, and try to increase the ingestion capacity.
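If the bottleneck really is between Filebeat and Logstash, the Logstash output in Filebeat can be given a bit more throughput. A hedged sketch (host and numbers are placeholders, and any real gain depends on what Logstash and its own outputs can absorb):

```yaml
output.logstash:
  hosts: ["logstash.example.com:5044"]  # placeholder host
  worker: 2                             # parallel connections per configured host
  bulk_max_size: 2048                   # maximum events per batch sent to Logstash
  loadbalance: true                     # only relevant when several hosts are listed
```

Often the fix is on the Logstash side instead (pipeline workers, batch size, or the capacity of whatever Logstash writes to), so check its logs and monitoring first.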

Got it, buddy... thank you for enlightening me :slight_smile:
