I'm having somewhat similar issue to other people here in discussions. Here's what happens:
When run manually with `sudo filebeat -c /etc/filebeat/filebeat.yml`, everything works fine and filebeat continuously ships new events onward (to redis + logstash in my case).
However, that is not the case when run as a service. In that instance, the service starts fine and even works for 2-3 scan cycles (I'm running with loglevel=debug), then stops detecting new lines in the logfile, so nothing gets shipped.
At that point a restart is required for new log entries to be picked up and shipped. After a restart it works again for a couple of scan cycles and then loses track of the logfile again. (To reiterate: this does not happen when filebeat is run manually.)
Here's what I did to reproduce this behaviour:
- Dropped previous filebeat logs and restarted the service
- Appended some lines to a logfile, they went through
- Appended some more, they got sent as well
- On the third scan cycle filebeat stopped detecting changes
- Restarted filebeat (a new filebeat logfile was created)
- Filebeat picks up undetected lines from before and sends them over
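The append-and-wait part of the steps above can be scripted; this is just a sketch of what I did by hand. The function name, the `SCAN_WAIT` variable, and the log path are mine, not from filebeat, and the 15s wait assumes the default `scan_frequency` of 10s.

```shell
# Hypothetical repro helper: append N test lines to a log file, pausing
# one scan cycle between appends so filebeat gets a chance to pick each up.
append_test_lines() {
    # $1 = target log file, $2 = number of lines to append
    local log=$1 n=$2 i
    for i in $(seq 1 "$n"); do
        printf 'test entry %s %s\n' "$i" "$(date +%s)" >> "$log"
        sleep "${SCAN_WAIT:-15}"   # assumed: longer than scan_frequency (10s default)
    done
}

# Usage (needs write access to the log, so typically run via sudo):
# append_test_lines /var/log/nginx/access.log 3
```

In my runs, by the third append the new lines no longer showed up in redis until the service was restarted.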
I will attach both filebeat logfiles: one showing it working and then failing, and one from after the restart when it picks up the old lines.
My setup is pretty minimal:
```yaml
filebeat:
  registry_file: /var/lib/filebeat/registry
  prospectors:
    - paths:
        - /var/log/nginx/*.log
      encoding: utf-8
      input_type: log

output:
  console:
    pretty: false
  redis:
    host: "elastic1.logging"
    port: 6379
    password: "same_behaviour_without_redis_auth"
    index: "logstash"

logging:
  level: debug
  to_syslog: false
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    rotateeverybytes: 10485760 # = 10MB
    keepfiles: 7
```
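One way to see whether filebeat has silently stopped tracking a file is to compare the offset it last recorded in the registry against the file's actual size. A minimal sketch, assuming the filebeat 1.x registry layout (a JSON object keyed by the tracked file's path, with an `offset` field); the helper name is mine:

```shell
# Hypothetical helper: print the offset filebeat's registry has recorded
# for a given log file. If this stops growing while the file keeps growing,
# filebeat has lost track of it.
registry_offset() {
    # $1 = registry file (e.g. /var/lib/filebeat/registry)
    # $2 = absolute path of the tracked log file
    python -c 'import json, sys
d = json.load(open(sys.argv[1]))
print(d.get(sys.argv[2], {}).get("offset", 0))' "$1" "$2"
}

# Usage, comparing against the real file size:
# registry_offset /var/lib/filebeat/registry /var/log/nginx/access.log
# stat -c %s /var/log/nginx/access.log
```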
(sorry for the gist, but txt attachments are not allowed here)