Hi, I am using Filebeat and I am having a problem with my backup log file, which is written to once a day. When this file is updated, the entire file is read and sent to Logstash twice. The other log files appear to be shipping data correctly; however, they are written to much more often. Any ideas on why this is happening or how to debug it?
In case it helps, here is the prospector snippet:
-
  paths:
    - /var/log/backup.log
  fields:
    type: backup
    server: rg_u16_prod_db_slave
    env: rg_production
    application_env: production
    chef_roles: ["server", "mysql_db_slave"]
  scan_frequency: "60s"
  backoff: "1s"
I tried to get some more information by deleting the old log and looking only at the new entries. However, I am still reading in 6,000 lines every time the file changes. The interesting thing is that log entries are being added that do not exist in the file, so I am wondering whether these lines are somehow getting stuck in Logstash, where a line is written to Elasticsearch but Logstash does not think it has written it. Is this possible?
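Another thing worth ruling out is whether the backup job recreates or truncates the file when it writes its daily entry; as far as I understand, either of those makes Filebeat start again from offset 0 and resend the whole file. A rough watcher like the sketch below (path taken from the config above, poll interval arbitrary) would show whether the inode or size resets around the time the backup runs:

#!/usr/bin/env python3
# Debugging sketch: poll the backup log and report whenever its inode or size
# changes, to catch the file being recreated or truncated by the backup job.
import os
import time

LOG_PATH = "/var/log/backup.log"

last = None
while True:
    st = os.stat(LOG_PATH)
    current = (st.st_ino, st.st_size)
    if current != last:
        # A new inode means the file was recreated; a smaller size than the
        # previous reading means it was truncated.
        print(time.strftime("%H:%M:%S"), "inode=%d size=%d" % current)
        last = current
    time.sleep(5)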