I'm using Filebeat 8.14.3 to collect many different files, and close.reader.on_eof doesn't seem to be working. Each file only needs to be read once and then closed. If you need more of my filebeat.yml, please let me know.
I am sure it's something I did, but I can't figure it out.
Here is a little background. I receive a tar archive that I extract into a directory that the Filebeat container (Docker) has access to and is monitoring. I have other filestream inputs in the same filebeat.yml monitoring those directories as well.
When the tar is extracted, Filebeat's memory usage jumps from MB to GB and doesn't come back down. Eventually the Docker service crashes because too many files are open. As a test, I deleted the extracted files to see whether Filebeat's memory would drop, and it did.
I can run Filebeat in debug mode to gather better evidence, but I don't know which selectors to choose. Would it be harvester? Here is roughly what I was going to try (see the sketch below).
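This is a minimal sketch of how I'd enable debug logging in filebeat.yml; the "*" selector (everything) is the only one I'm sure of, any narrower selector name like "harvester" would be a guess on my part:

```yaml
# Sketch only: turn on debug logging for all selectors.
# Narrowing the list to specific selector names (e.g. "harvester")
# is an assumption, not something I've confirmed for filestream.
logging:
  level: debug
  selectors: ["*"]
```

I believe the command-line equivalent is running `filebeat -e -d "*"`.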
Do I have close.reader.on_eof: true on the correct level in the yaml?
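For reference, here is a stripped-down sketch of where I currently have the setting, directly under the filestream input (the id and paths are placeholders, not my actual config):

```yaml
filebeat.inputs:
  - type: filestream
    id: extracted-tar-files        # placeholder id
    paths:
      - /data/extracted/*.log      # placeholder path
    # Per-input option: close the reader as soon as EOF is reached.
    close.reader.on_eof: true
```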