Filebeat keeps files open forever

Hi,

I have a similar issue to this one, and I upgraded to the latest version, 5.0, to see if the issue goes away, but I still see the same behavior.

Below are the configuration and a few debug lines showing what Filebeat reports when a file is renamed:

-
  paths:
   - /apps/opt/logs/*/*/*.log
  exclude_lines: ["^DEBUG"]
  input_type: log
  ignore_older: 1m
  close_inactive: 10s
  clean_removed: true
  clean_inactive: 10s
  close_removed: true
  close_renamed: true
  #force_close_files: true
  scan_frequency: 5s

2016/11/08 23:49:43.090250 prospector_log.go:269: DBG File rename was detected: /apps/opt/logs/application.log -> /apps/opt/logs/application.23_08Nov2016.34.log, Current offset: 21675185
2016/11/08 23:49:43.090275 prospector_log.go:282: DBG File rename detected but harvester not finished yet.
2016/11/08 23:49:43.090304 prospector_log.go:288: DBG Harvester for file is still running: /apps/opt/logs/application.23_08Nov2016.34.log

Any help is appreciated.

Thanks,
Srinivas

It looks like the indentation of your config file is a little bit off for close_removed and the options below. Can you correct this? force_close_files is not available anymore in 5.0.
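For reference, a consistently indented version of that block could look like this (a sketch only; force_close_files is dropped because it no longer exists in 5.0, and note that Filebeat requires clean_inactive to be greater than ignore_older + scan_frequency, so the original 10s would be rejected):

-
  paths:
    - /apps/opt/logs/*/*/*.log
  exclude_lines: ["^DEBUG"]
  input_type: log
  ignore_older: 1m
  scan_frequency: 5s
  close_inactive: 10s
  close_removed: true
  close_renamed: true
  clean_removed: true
  # clean_inactive must exceed ignore_older + scan_frequency (1m + 5s here),
  # hence 2m instead of the original 10s.
  clean_inactive: 2m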

Are the files kept open "forever" or are they closed after close_inactive? How often do you update the files?

Hi Ruflin,

I am using Filebeat 5.0, and Filebeat still holds the deleted logs open. I added this to my config file:
close_inactive: 5m

Hi Ruflin, the files are open forever. The files get updated very often, filling 750 MB in 20 minutes or less.

Can you share some log files in a gist? Is it possible that the output did not catch up with the reading?

We have IoT process logs which are shared via NFS mounts from four VMs. Do you need any other information?

It would be nice to see some full log files from Filebeat; you can share them in a gist. Be aware that, in general, it is not recommended to fetch log files from mounted volumes; the recommendation is to have Filebeat installed on all edge nodes.

Hi Ruflin, apologies for the late reply; I moved on to other things and then the holidays came. But I have to look into this one final time.

I think I see what's going on. Our process writes the log files and rotates each file once it reaches about 750 MB, which happens in less than 10 minutes.

Filebeat then reads the logs. Running lsof -p against the Filebeat process, I notice that Filebeat still holds a reference to the deleted file, even though it no longer exists, and never closes it.

I am not sure whether Filebeat is done reading the file completely (I am assuming it's not). I am using the close_removed and clean_removed options, but it still doesn't look like Filebeat ever releases the files.

I am not sure how to debug this further. Please let me know if there is anything else I can do to avoid this situation.

Thanks,
Sri

The best starting point is to look at the log lines and see what they state. If the harvester is still open and catching up, that explains why the files are still open. If the output is not blocked, close_removed should still apply as soon as the event is sent. But here the network drive could come into play: Filebeat may be getting cached data instead of being notified that the file was removed. Note: I don't know the details of the NFS mount implementation.
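To illustrate, here is roughly how I read those two options (a sketch; the comments reflect my understanding of the 5.0 behavior, not exact wording from the docs):

filebeat.prospectors:
- input_type: log
  paths:
    - /apps/opt/logs/*/*/*.log
  # Close the harvester (releasing the file handle) once Filebeat notices the
  # file was removed. The handle is only released when the harvester can act,
  # i.e. not while it is blocked waiting on the output.
  close_removed: true
  # Drop the file's state from the registry once the file is gone from disk,
  # so a new file at the same path does not inherit the old offset.
  clean_removed: true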

Please have a look at the log files and let me know what you see there. Best with the debug level enabled; then you should see what is happening (or not).
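If it helps, debug logging can be enabled in filebeat.yml along these lines (a sketch; the selector names are assumptions based on the components involved, and ["*"] simply enables everything):

logging.level: debug
# Limit the debug output to the relevant components; use ["*"] for all of them.
logging.selectors: ["prospector", "harvester", "registrar"]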

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.