Filebeat holding old files open on CentOS 6.5

Hi, I'm not sure if this is a bug, but Filebeat is killing space on the filesystem. Once I restart Filebeat, the system goes back to normal.
Can someone please explain why Filebeat stays stuck on old files? I'm running version 5.5.2.

Filebeat is supposed to tail log files. As content can be appended to a log file at any time, Filebeat keeps the file open for some time, checking for new content. As log files can be removed or rotated out, closing the file too early can lead to data loss. That's why the file is kept open.

Have a look at the different file-closing settings in the docs.

On all OSes, a deleted file is only truly removed once no process is holding it open anymore. Until then, its blocks still count against the filesystem, which is why df shows the disk filling up even though the files appear to be gone.
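
A quick way to confirm this is what is eating the disk (a sketch, assuming lsof is installed and the process is named filebeat) is to list open handles whose files have already been deleted:

  # show deleted-but-still-open files; the SIZE/OFF column shows the space they still occupy
  lsof -nP +L1 | grep filebeat

Any large entries there are rotated-out logs that Filebeat is still holding open; the space is only returned to the filesystem once those handles are closed (or the process is restarted).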


Hi Steffens,

Thanks for your suggestion. I have made the filebeat.yml configuration below, but it looks like my issue is still ongoing.

[root@hostname ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 130G 130G 0 100% /
tmpfs 95G 5.7M 95G 1% /dev/shm
/dev/sda1 485M 32M 428M 7% /boot

after filebeat restart

[root@hostname /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 130G 106G 18G 86% /
tmpfs 95G 5.7M 95G 1% /dev/shm
/dev/sda1 485M 32M 428M 7% /boot

filebeat.yml configuration:

- input_type: log
  paths:
    - /opt/trafficserver/var/log/trafficserver/traffic.out
  document_type: traffic
  close_renamed: true
  close_inactive: 10m

- input_type: log
  paths:
    - /opt/trafficserver/var/log/trafficserver/custom_ats_2.log
  document_type: custom_ats_2
  close_renamed: true
  close_inactive: 10m

Thanks in advance, Ravi

Are these files written once, or is more content appended at any time?

How big are those files?

How fast are they rotated?

Maybe 10m is still too long? You can also try close_eof to force Filebeat to close the file handle every time the end of the file is reached. If more content is added to the file, Filebeat will reopen it and continue processing.
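
For reference, enabling it is just one extra line per prospector (a minimal sketch with a placeholder path, not your actual config):

  - input_type: log
    paths:
      - /var/log/app/*.log
    close_inactive: 10m
    close_eof: true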


Thanks for your advice, Steffens. I have made the change this morning and will monitor for the next 3 days to see if the issue still exists. Also, I'm harvesting from 14 sources with individual paths; each file is about 30m to 300m, which is really huge, I know.

Sample configuration now is

- input_type: log
  paths:
    - /opt/trafficserver/var/log/trafficserver/traffic.out
  document_type: traffic
  close_renamed: true
  close_inactive: 10m
  close_eof: true

- input_type: log
  paths:
    - /opt/trafficserver/var/log/trafficserver/custom_ats_2.log
  document_type: custom_ats_2
  close_renamed: true
  close_inactive: 10m
  close_eof: true

Thanks / Ravi

This topic was automatically closed after 21 days. New replies are no longer allowed.