I am trying to understand how Filebeat reacts to files that are dumped in one go every x hours (as opposed to log files that are appended to on the fly, line by line).
For instance, I have a file /log/audit.dump that is recreated from scratch by a script every day at midnight with new content. It is a simple CSV file with a few thousand lines (a simplified sketch of the script follows the list below).
I have observed several different behaviors from Filebeat:
- The newly generated file is read in full and the new lines are sent to Logstash (the expected behavior).
- Reading starts at the offset recorded for the previous file, so the first line sent to Logstash is truncated.
- The file is not read at all, as if Filebeat had not detected that the file changed.
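For context, the nightly script is roughly equivalent to the Python sketch below. This is hypothetical code, not the actual script: fetch_audit_rows is a stand-in for the real data source, and the sample rows are made up. I am showing two ways the dump could be written, because I suspect the difference between truncating the file in place (same inode) and writing to a temporary file then renaming it (new inode) affects what Filebeat does with the offset stored in its registry, but I am not sure.

    #!/usr/bin/env python3
    """Simplified, hypothetical sketch of the nightly dump script."""
    import csv
    import os
    import tempfile

    DUMP_PATH = "/log/audit.dump"  # same path as in the Filebeat prospector

    def fetch_audit_rows():
        # Stand-in for the real data source; the header line matches the
        # exclude_lines pattern ("^ACTION") in the Filebeat config.
        return [
            ["ACTION", "USER", "TIMESTAMP"],
            ["login", "alice", "2016-01-01T00:00:01"],
        ]

    def rewrite_in_place(rows):
        # Variant A: truncate and rewrite the same file.
        # The inode does not change, so Filebeat may keep its old offset.
        with open(DUMP_PATH, "w", newline="") as f:
            csv.writer(f).writerows(rows)

    def replace_atomically(rows):
        # Variant B: write to a temporary file in the same directory, then
        # rename it over the old one. The inode changes, so Filebeat should
        # see a brand-new file.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(DUMP_PATH))
        with os.fdopen(fd, "w", newline="") as f:
            csv.writer(f).writerows(rows)
        os.replace(tmp, DUMP_PATH)

    if __name__ == "__main__":
        rewrite_in_place(fetch_audit_rows())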
Here is my Filebeat prospector configuration:
    filebeat:
      prospectors:
        - paths:
            - /log/audit.dump
          input_type: log
          document_type: audit
          exclude_lines: ["^ACTION"]
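For completeness, the only other section in my filebeat.yml is the Logstash output; it looks like this (the host below is a placeholder, not my real host, and everything else is left at its defaults):

    output:
      logstash:
        hosts: ["logstash.example.com:5044"]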
Could you please shed some light on how one is supposed to approach this kind of situation?
Am I missing something in the way I am configuring Filebeat?
Thanks in advance for your help.