Filebeat keeps file handle on deleted logfile


I have following issue and hope you can provide a solution.
We are using elastic stack for monitoring our application.

Priority 1: application must run
Priority 2: monitoring should run to support and monitor the application.

Following issue case:

Filebeat ships logfiles to Redis. When Redis becomes unavailable because of some service disruption, or Filebeat backs off because the Redis queue is at its limit, Filebeat keeps the file handle to the logfile open. All OK up to here.

Let's say Redis has a big issue and stays down for some days. Filebeat keeps the file handles open. OK.
But now the logfile is deleted (not rotated, deleted) to free up disk space on the application server.

Here is the issue. Filebeat keeps the handle open. I can see it with lsof:

filebeat  27086   filebeat   12u      REG              253,0 325485604     263149 /var/log/myapp/my.log (deleted)

df -h also shows that the disk space has not been freed up.
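For anyone who wants to reproduce the general symptom outside of Filebeat: on Linux, a process that keeps a descriptor open on an unlinked file keeps the blocks allocated, and both lsof and /proc show the "(deleted)" marker. A minimal shell sketch (the scratch filename is made up for illustration):

```shell
# Open fd 3 on a scratch file, write to it, then unlink the file
# while the descriptor is still open.
exec 3> /tmp/handle_demo.log
echo "some log data" >&3
rm /tmp/handle_demo.log

# The kernel still shows the open, unlinked file in this shell's fd table:
ls -l /proc/$$/fd | grep handle_demo   # 3 -> /tmp/handle_demo.log (deleted)

# Only when the descriptor is closed does the kernel free the blocks.
exec 3>&-
```

The same check works against a running Filebeat with `lsof -nP +L1 -c filebeat`, which lists open files whose link count is below 1, i.e. already deleted.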

My Filebeat input config looks like this:

  - type: log
    enabled: true
    paths:
      - /var/log/myapp/my.log*
    encoding: windows-1252
    fields:
      logType: generic-json
      log.format: json
    fields_under_root: true
    max_bytes: 90000000
    #ignore_older: 0
    #scan_frequency: 1s
    close_timeout: 5m
    close_removed: true

I thought the last two lines of the config would do the trick.

Thanks, Andreas

To be a bit clearer: I want Filebeat to drop the data if the file is deleted. Yes, I accept the data loss from those logfiles, but the current behavior may disrupt my application if the server runs out of disk space.

I tried adding close_inactive: 1m, but it doesn't really change anything.

We are using Redis as the output. As long as Redis is unavailable (port down), Filebeat holds the file handle. If Redis is available, there is no issue with deleting the files. They are deleted immediately: Filebeat stops reading the file and the operating system deletes it as expected.

Is this file-handle holding while Redis is down a bug or a configuration issue?
I have not configured any Filebeat queues, so they are at their defaults.

It looks as if the issue "only" exists when Filebeat opens a file handle / file descriptor for a new file. When Redis is unavailable (no matter whether the port is down or it is OOM because the memory limit was reached) and Filebeat opens a new file, Filebeat will not release the file descriptor when the file is deleted at the operating-system level.

If Filebeat is already pushing logs to Redis and I then shut down Redis or reach the Redis memory limit, Filebeat will release the file descriptor and the file will be deleted by the OS.


What version of Filebeat are you using? Could you give the filestream input a try? This input is intended as a replacement for the log input and solves some of the problems the log input has.
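For reference, a rough filestream equivalent of the log input above might look like the following. This is a sketch, not a drop-in config: the `id` value is made up (filestream inputs need a unique id), and I am assuming the usual option renames from the log input (`max_bytes` becomes `message_max_bytes`, `close_removed` becomes `close.on_state_change.removed`, `close_timeout` becomes `close.reader.after_interval`):

```yaml
  - type: filestream
    id: myapp-log            # hypothetical id; must be unique per filestream input
    enabled: true
    paths:
      - /var/log/myapp/my.log*
    encoding: windows-1252
    fields:
      logType: generic-json
      log.format: json
    fields_under_root: true
    message_max_bytes: 90000000
    close.on_state_change.removed: true
    close.reader.after_interval: 5m
```

Please check the renamed options against the filestream reference for your exact Filebeat version before deploying.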

Thanks for your reply.

We are using the following version of Filebeat:

filebeat version 8.3.2 (amd64), libbeat 8.3.2 [45f722f492dcf1d13698c6cf618b339b1d4907be built 2022-07-06 10:12:50 +0000 UTC]

I will give the filestream input a try.

Looks like changing the input type from log to filestream solved the issue.
Thanks a lot.

