I have the following issue and hope you can provide a solution.
We are using the Elastic Stack to monitor our application.
Priority 1: application must run
Priority 2: monitoring should run to support and monitor the application.
The issue case is as follows:
Filebeat ships log files to Redis. When Redis becomes unavailable because of some service disruption, or backs off because the Redis queue is at its limit, Filebeat keeps the file handle to the log file open. All OK until here.
Let's say Redis has a big issue and stays down for a few days. Filebeat keeps the file handles open. OK.
But now the log file is deleted (not rotated, deleted) to free up disk space on the application server.
Here is the issue: Filebeat keeps the handle open. I can see it with lsof.
To be a bit clearer: I want Filebeat to drop the data if the file is deleted. Yes, I accept the loss of that log data, but the current behavior may disrupt my application if the server runs out of disk space.
I tried adding close_inactive: 1m, but it doesn't really change anything.
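For reference, this is roughly what our input config looks like (the path is a placeholder; close_removed is left at its default):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log    # placeholder path
    close_inactive: 1m          # added while troubleshooting, no visible effect
    # close_removed is not set, so it stays at its default (true)
```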
We are using Redis as the output. As long as Redis is unavailable (port down), Filebeat holds the file handle. If Redis is available, there is no issue with deleting the files: they are removed immediately, Filebeat stops reading the file, and the operating system deletes it as expected.
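The output side is a plain Redis output, roughly like this (host and key are placeholders):

```yaml
output.redis:
  hosts: ["redis-host:6379"]    # placeholder host
  key: "filebeat"               # placeholder list key
```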
Is this holding of file handles while Redis is down a bug or a configuration issue?
I have not configured any Filebeat queues, so they are at the defaults.
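Written out explicitly, the internal memory queue would look something like this (these values are the documented 7.x defaults; they may differ in other versions):

```yaml
queue.mem:
  events: 4096            # max events buffered in memory
  flush.min_events: 2048  # batch size before flushing to the output
  flush.timeout: 1s       # flush at least this often
```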
It looks as if the issue "only" exists when Filebeat opens a file handle / file descriptor for a new file. When Redis is unavailable (no matter whether the port is down or it is out of memory because the memory limit is reached) and Filebeat opens a new file, Filebeat will not release the file descriptor when that file is deleted at the operating-system level.
If Filebeat is already pushing logs to Redis and I then shut down Redis or the Redis memory limit is reached, Filebeat releases the file descriptor and the file is deleted by the OS.
What version of Filebeat are you using? Could you give the filestream input a try? It is intended as a replacement for the log input and solves some of the problems the log input has.
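A minimal filestream sketch, assuming paths similar to yours (untested; id and path are placeholders), would look something like this:

```yaml
filebeat.inputs:
  - type: filestream
    id: myapp-logs                        # placeholder id
    paths:
      - /var/log/myapp/*.log              # placeholder path
    close.on_state_change.inactive: 1m    # filestream counterpart of close_inactive
    # close.on_state_change.removed defaults to true
```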