Filebeat not harvesting new lines on Windows 2008 R2/IIS

We are running into a problem where Filebeat only harvests lines on startup or when new files are created. Once it reads to the current "end of file" of an existing log, no new entries are harvested, despite the log file being continuously written to.

Filebeat version: 6.2.3
OS: Windows 2008 R2
Log File: IIS

You can see in the filebeat log below that the offset value never increases.

Note: We are not using the IIS module here as we do some custom parsing of cookies and other data in the log file.

Service Command line: "C:\Program Files\filebeat\filebeat-6.2.3-windows-x86_64\\filebeat.exe" -c "C:\Program Files\filebeat\filebeat-6.2.3-windows-x86_64\\filebeat.yml" -path.home "C:\Program Files\filebeat\filebeat-6.2.3-windows-x86_64" -path.data "C:\\ProgramData\\filebeat" -path.logs "C:\\ProgramData\\filebeat\logs"

filebeat.yml:

filebeat.prospectors: []
filebeat.modules: []
filebeat.registry_file: C:/Program Files/filebeat/filebeat-6.2.3-windows-x86_64/registry
filebeat.config_dir: C:/Program Files/filebeat/filebeat-6.2.3-windows-x86_64/conf.d
logging.files:
  path: C:/Program Files/filebeat/filebeat-6.2.3-windows-x86_64/logs
tags:
- SERVER-NAME
logging.to_files: true
logging.level: debug
output.logstash:
  hosts:
  - logstash02:5044
  - logstash01:5044
  enabled: true
  loadbalance: true

prospector.yml in conf.d:

filebeat:
  prospectors:
  - paths:
    - C:/inetpub/logs/LogFiles/W3SVC1/*.log
    encoding: utf-8
    tags:
    - IISlog
    ignore_older: 24h
    close_inactive: 2h
    scan_frequency: 1m
    enabled: true

filebeat log:

2018-12-10T23:00:49.981+0100	DEBUG	[harvester]	log/log.go:85	End of file reached: C:\inetpub\logs\LogFiles\W3SVC1\u_ex18121022.log; Backoff now.
....
2018-12-10T23:06:47.812+0100	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-12-10T23:06:47.812+0100	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-12-10T23:06:47.813+0100	DEBUG	[prospector]	log/prospector.go:361	Check file for harvesting: C:\inetpub\logs\LogFiles\W3SVC1\u_ex18121022.log
2018-12-10T23:06:47.813+0100	DEBUG	[prospector]	log/prospector.go:447	Update existing file for harvesting: C:\inetpub\logs\LogFiles\W3SVC1\u_ex18121022.log, offset: 389900
2018-12-10T23:06:47.813+0100	DEBUG	[prospector]	log/prospector.go:499	Harvester for file is still running: C:\inetpub\logs\LogFiles\W3SVC1\u_ex18121022.log
2018-12-10T23:06:47.813+0100	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 6, After: 6
2018-12-10T23:07:17.681+0100	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":234,"time":234},"total":{"ticks":1405,"time":1405,"value":1405},"user":{"ticks":1171,"time":1171}},"info":{"ephemeral_id":"67b81927-ae3e-4ad9-8646-21b25dc90336","uptime":{"ms":390063}},"memstats":{"gc_next":19506976,"memory_alloc":9751504,"memory_total":112107224,"rss":40960}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"output":{"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":6}}}}}
2018-12-10T23:07:47.595+0100	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":234,"time":234},"total":{"ticks":1405,"time":1405,"value":1405},"user":{"ticks":1171,"time":1171}},"info":{"ephemeral_id":"67b81927-ae3e-4ad9-8646-21b25dc90336","uptime":{"ms":420062}},"memstats":{"gc_next":19506976,"memory_alloc":9797848,"memory_total":112153568}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"output":{"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":6}}}}}
2018-12-10T23:07:47.644+0100	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-12-10T23:07:47.644+0100	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-12-10T23:07:47.645+0100	DEBUG	[prospector]	log/prospector.go:361	Check file for harvesting: C:\inetpub\logs\LogFiles\W3SVC1\u_ex18121022.log
2018-12-10T23:07:47.645+0100	DEBUG	[prospector]	log/prospector.go:447	Update existing file for harvesting: C:\inetpub\logs\LogFiles\W3SVC1\u_ex18121022.log, offset: 389900
2018-12-10T23:07:47.645+0100	DEBUG	[prospector]	log/prospector.go:499	Harvester for file is still running: C:\inetpub\logs\LogFiles\W3SVC1\u_ex18121022.log
2018-12-10T23:07:47.645+0100	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 6, After: 6
2018-12-10T23:08:17.515+0100	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":234,"time":234},"total":{"ticks":1405,"time":1405,"value":1405},"user":{"ticks":1171,"time":1171}},"info":{"ephemeral_id":"67b81927-ae3e-4ad9-8646-21b25dc90336","uptime":{"ms":450065}},"memstats":{"gc_next":19506976,"memory_alloc":9876416,"memory_total":112232136,"rss":-13217792}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"output":{"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":6}}}}}

The log shows state metrics for 6 files, but the other 5 are old logs with no new data, so I cut those out.

Hmm, this definitely looks odd. As an experiment, could you set https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html#_literal_backoff_factor_literal to 1?
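For reference, that setting goes in the prospector config. A sketch based on the prospector.yml above (the `backoff` and `max_backoff` values shown are the documented 6.x defaults, included here only for context):

```yaml
filebeat:
  prospectors:
  - paths:
    - C:/inetpub/logs/LogFiles/W3SVC1/*.log
    backoff: 1s        # initial wait after EOF is reached (default 1s)
    backoff_factor: 1  # keep the wait constant instead of doubling it (default 2)
    max_backoff: 10s   # upper bound on the wait (default 10s)
```

With `backoff_factor: 1`, Filebeat re-checks the file for new lines at a fixed interval rather than backing off exponentially, which should rule out the backoff logic as the cause.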

I set backoff_factor to 1 without any improvement.

Another interesting thing to note is that I can reproduce this on all of my Windows 2008 R2 IIS servers, but the same configuration (yml + Filebeat version) works on Windows 2016.

New information for today...

If I disable my Logstash output and switch to output.file, everything works as expected: new lines from IIS are harvested and written to a file. Success, but not very useful.
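For anyone who wants to reproduce the test, the change looked roughly like this (the output path is illustrative; in 6.x only one output may be enabled at a time, so the Logstash output has to be disabled first):

```yaml
output.logstash:
  enabled: false        # disable the normal output for the test
output.file:
  enabled: true
  path: C:/ProgramData/filebeat/output   # hypothetical test directory
  filename: filebeat.out
```

Since harvesting works with the file output, this points at the Logstash output (or backpressure from it) rather than the harvester itself.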

So my question is: why does Filebeat stop after sending the initial 2048 events to Logstash? I do not see any errors logged in either Filebeat or Logstash when debugging, so what am I missing?

I set bulk_max_size to 512, with more or less the same result.
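For reference, that change was in the Logstash output section of filebeat.yml (2048 is the documented default for output.logstash in 6.x):

```yaml
output.logstash:
  hosts:
  - logstash02:5044
  - logstash01:5044
  enabled: true
  loadbalance: true
  bulk_max_size: 512   # max events per batch; default is 2048
```

The idea was that if the pipeline stalls once a full initial batch is in flight, a smaller batch size should change where it stalls, but it did not.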

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.