Entire log is read when it changes

Hi, I am using Filebeat and I am having some problems with my backup log file. This file is written to once a day. The problem is that when this log file is updated, the entire file is read and sent to Logstash twice. The other log files appear to be sending data correctly; however, they are written to much more often. Any ideas on why this is happening or how to debug it?

In case it helps, here is the prospector snippet:
-
  paths:
    - /var/log/backup.log
  fields:
    type: backup
    server: rg_u16_prod_db_slave
    env: rg_production
    application_env: production
    chef_roles: ["server", "mysql_db_slave"]
  scan_frequency: "60s"
  backoff: "1s"

Hi @joshsmoore,

How is your backup file written? Is new content appended, or is the content replaced every time it is written?

What version of Filebeat are you using?

You can also check the Filebeat logs to see whether any issues are reported for this file.
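
For example, a quick way to see whether the harvester keeps being restarted for that file is a sketch along these lines (the Filebeat log path and the exact "Harvester started" message wording are assumptions and can vary by version and install):

from pathlib import Path

FILEBEAT_LOG = Path("/var/log/filebeat/filebeat")  # assumed default location; adjust to your setup
TARGET = "/var/log/backup.log"

# Print every line where a harvester was started for the backup log.
# Repeated starts between backups suggest Filebeat sees the file as new or truncated.
for line in FILEBEAT_LOG.read_text().splitlines():
    if TARGET in line and "Harvester started" in line:
        print(line)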

I think it is just appended, because the inode number stays the same. I do not see any problems in the log, and I am running Filebeat 6.3.0.
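
As a reference for anyone checking the same thing, here is a rough sketch that compares the file's current inode and size against what Filebeat has recorded; the registry path and its JSON layout are assumptions for a default 6.x install:

import json
import os

LOG_PATH = "/var/log/backup.log"
REGISTRY_PATH = "/var/lib/filebeat/registry"  # assumed default 6.x location; adjust if needed

st = os.stat(LOG_PATH)
print("current inode:", st.st_ino, "current size:", st.st_size)

# The 6.x registry is assumed to be a JSON array of per-file state entries.
with open(REGISTRY_PATH) as f:
    for entry in json.load(f):
        if entry.get("source") == LOG_PATH:
            print("registry inode:", entry["FileStateOS"]["inode"],
                  "registry offset:", entry["offset"])
            # If the recorded offset is larger than the current file size,
            # Filebeat treats the file as truncated and re-reads it from 0.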

I tried to get some more information by deleting the old log and looking only at the new entries. However, I am still reading in 6000 lines every time the file is changed. The interesting thing is that log entries are being added that do not exist in the file, so I am wondering if these lines are somehow getting stuck in Logstash, where a line is written to Elasticsearch but Logstash does not think it has been written. Is this possible?

I found the problem. The utility that was writing the log copied the log to a new file, truncated the original, and then copied the contents back. That was the problem.

Thanks,
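
For anyone who finds this thread later, here is a minimal sketch of the two write patterns, assuming the file size dropping below Filebeat's recorded offset is what triggered the full re-read (the helper names are just for illustration):

import shutil

LOG_PATH = "/var/log/backup.log"

def append_entry(line):
    # Appending keeps the inode and the recorded offset valid,
    # so Filebeat only picks up the new bytes.
    with open(LOG_PATH, "a") as f:
        f.write(line + "\n")

def rewrite_log(tmp_path="/tmp/backup.log.tmp"):
    # What the backup utility was doing: copy out, truncate, copy back.
    # The truncation drops the size below Filebeat's saved offset, so the
    # file is treated as truncated and read again from the beginning.
    shutil.copyfile(LOG_PATH, tmp_path)
    open(LOG_PATH, "w").close()          # truncate the original
    shutil.copyfile(tmp_path, LOG_PATH)  # copy the contents back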
