Filebeat skipping log files

Hello,

Recently I have noticed Filebeat skipping some log files completely. From the logs it seems as if Filebeat thinks it has already consumed the log in question, which is impossible, as the log didn't even exist the last time Filebeat was run.

Please see the Filebeat log output from this evening's test:

2016/08/29 16:48:24.959850 registrar.go:111: INFO States Loaded from registrar: 36
2016/08/29 16:48:24.959912 crawler.go:35: INFO Loading Prospectors: 1
2016/08/29 16:48:24.960000 prospector_log.go:38: DBG  exclude_files: []
2016/08/29 16:48:24.960017 prospector_log.go:40: INFO Load previous states from registry into memory
2016/08/29 16:48:24.960059 prospector_log.go:53: INFO Previous states loaded: 36
2016/08/29 16:48:24.960088 publish.go:101: INFO Start sending events to output
2016/08/29 16:48:24.960147 prospector.go:55: DBG  File Configs: [/var/log/pmta/eu1dme-pmta07p/*.csv /var/log/pmta/eu1dme-pmta08p/*.csv /var/log/pmta/eu1dme-pmta09p/*.csv]
2016/08/29 16:48:24.960165 crawler.go:47: INFO Loading Prospectors completed. Number of prospectors: 1
2016/08/29 16:48:24.960175 crawler.go:62: INFO All prospectors are initialised and running with 36 states to persist
2016/08/29 16:48:24.960195 crawler.go:57: DBG  Starting prospector 0
2016/08/29 16:48:24.960202 prospector.go:94: INFO Starting prospector of type: log
2016/08/29 16:48:24.960210 prospector_log.go:57: DBG  Start next scan
2016/08/29 16:48:24.960210 spooler.go:67: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/08/29 16:48:24.960064 registrar.go:173: INFO Starting Registrar
2016/08/29 16:48:24.960390 prospector_log.go:140: DBG  Check file for harvesting: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv
2016/08/29 16:48:24.960410 prospector_log.go:170: DBG  Update existing file for harvesting: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv, offset: 52428891
2016/08/29 16:48:24.960424 prospector_log.go:179: DBG  Resuming harvesting of file: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv, offset: 52428891
2016/08/29 16:48:24.960664 log.go:217: DBG  Set previous offset for file: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv. Offset: 52428891 
2016/08/29 16:48:24.960682 log.go:207: DBG  Setting offset for file: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv. Offset: 52428891 
2016/08/29 16:48:24.960697 log.go:45: INFO Harvester started for file: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv
2016/08/29 16:48:24.960733 log.go:234: DBG  Update state: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv, offset: 52428891
2016/08/29 16:48:24.960820 log_file.go:83: DBG  End of file reached: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv; Backoff now.

I must point out that the log "/var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv" has never been seen by Filebeat before - it was only generated this morning, and I last ran Filebeat last night (i.e. before this log even existed).

So why would Filebeat, never having seen this log before, announce in its first few log lines:

2016/08/29 16:48:24.960390 prospector_log.go:140: DBG  Check file for harvesting: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv
2016/08/29 16:48:24.960410 prospector_log.go:170: DBG  Update existing file for harvesting: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv, offset: 52428891
2016/08/29 16:48:24.960424 prospector_log.go:179: DBG  Resuming harvesting of file: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv, offset: 52428891
2016/08/29 16:48:24.960664 log.go:217: DBG  Set previous offset for file: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv. Offset: 52428891 
2016/08/29 16:48:24.960682 log.go:207: DBG  Setting offset for file: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv. Offset: 52428891 
2016/08/29 16:48:24.960697 log.go:45: INFO Harvester started for file: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv
2016/08/29 16:48:24.960733 log.go:234: DBG  Update state: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv, offset: 52428891
2016/08/29 16:48:24.960820 log_file.go:83: DBG  End of file reached: /var/log/pmta/eu1dme-pmta07p/acct-2016-08-29-0000.csv; Backoff now.

suggesting it believed it had already ingested this log, when it most certainly had not?

The day before, it did the same to 5 out of 10 new log files; today, just 1 out of 10.

Any ideas?

Thanks,

Tim.

It seems like you are hitting the inode reuse issue: https://github.com/elastic/beats/issues/1600

The problem is that Filebeat identifies files by inode. If your file system assigns the same inode to a new file after the old one is removed, Filebeat assumes it is the same file. This is especially likely when the removal of one file and the creation of the next happen close together. To reduce this issue we introduced the clean_* config options in Filebeat 5.0: when a file is removed, its state is also removed from the registry file, so the inode is free for future files.
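You can see the effect directly from a shell. This is a minimal sketch (the directory and filenames are made up, and whether the inode is actually reused depends on the filesystem - on ext4 a freed inode is often handed straight back to the next new file):

```shell
# Sketch: delete a file, then immediately create a new one, and
# compare inode numbers. They may well be identical.
dir=$(mktemp -d)

echo "yesterday's log" > "$dir/acct-old.csv"
old=$(stat -c %i "$dir/acct-old.csv")   # inode of the old file

rm "$dir/acct-old.csv"                  # inode is now free

echo "today's log" > "$dir/acct-new.csv"
new=$(stat -c %i "$dir/acct-new.csv")   # may equal $old

echo "old inode: $old"
echo "new inode: $new"
rm -r "$dir"
```

If the two inode numbers match, the registry entry keyed by the old inode gets attributed to the brand-new file, which is why Filebeat "resumes" at the old offset instead of reading from the start.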

Thanks! - I figured it must be inode related, but didn't know about these config options. I will try them out.

Tim.

Great. Let me know in case you hit any issues with these config options. Be aware that these only exist in the most recent alpha release or the nightly builds: https://beats-nightlies.s3.amazonaws.com/index.html?prefix=filebeat/
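For reference, a sketch of what those options might look like in a 5.0-style filebeat.yml. The option names (clean_removed, clean_inactive) are from the 5.0 alpha; the paths are taken from the log output above and the durations are purely illustrative - note that clean_inactive must be larger than ignore_older + scan_frequency:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/pmta/eu1dme-pmta07p/*.csv
      - /var/log/pmta/eu1dme-pmta08p/*.csv
      - /var/log/pmta/eu1dme-pmta09p/*.csv
    # Drop the registry state as soon as the file is deleted from
    # disk, so a reused inode doesn't inherit the old offset.
    clean_removed: true
    # Also drop state for files with no updates inside this window;
    # must be greater than ignore_older + scan_frequency.
    ignore_older: 47h
    clean_inactive: 48h
```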

This topic was automatically closed after 21 days. New replies are no longer allowed.