The Filebeat process on the server is holding deleted files open (files already forwarded to Logstash), which eventually fills up the server's disk. A cron job is currently in place to restart Filebeat at regular intervals as a workaround, but the team wants a permanent solution. The Filebeat version we use is below.
filebeat version 1.2.3 (386)
@arya For a more permanent solution, I recommend upgrading to Filebeat 5.4 (it's compatible with ES 2.x) and using the close_timeout option. Set it to a value higher than your rotation interval.
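For reference, a minimal Filebeat 5.x prospector sketch with close_timeout set; the log paths are placeholders and the 1h value assumes your rotation interval is shorter than that.

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app/*.log   # placeholder path, adjust to your setup
    # Close the harvester after 1h even if the output is blocked,
    # so the OS can reclaim space from rotated/deleted files.
    close_timeout: 1h
```

Note that if a file is deleted before Filebeat finishes reading it, closing the harvester early can drop the remaining lines, so pick a value comfortably larger than the time needed to ship a full rotation.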
Also, consider that if Filebeat is holding onto files for so long, it means that the output is often blocked. Perhaps your Logstash/Elasticsearch clusters are under-provisioned?
So version 1.2.3 doesn't support the close_timeout option?
This problem occurs only for files that have been deleted; all other files are forwarded successfully. Doesn't that mean the Logstash/Elasticsearch clusters are working as expected?
If close_timeout is not set, a blocked output can cause Filebeat to keep files open indefinitely, which means the OS can't free the disk space of deleted files. Does that answer the question?