Hi, in my environment I have Filebeat 6.6 installed. A large volume of log files is written, and with the log rollover policy enabled the files are created as
serviceaudit.log.0, serviceaudit.log.1, and so on (the highest index being the latest).
The issue is that when a log file rolls over, a harvester is started for each of the new files:
```
2019-04-05T10:39:43.041+0700 INFO log/harvester.go:255 Harvester started for file: serviceaudit.log.0
2019-04-05T10:39:43.044+0700 INFO log/harvester.go:255 Harvester started for file: serviceaudit.log.1
2019-04-05T10:39:43.047+0700 INFO log/harvester.go:255 Harvester started for file: serviceaudit.log.3
```
But the files are never closed, even though I have `close_inactive: 2m` set in the config. When the older files are removed/deleted, the Filebeat process still holds them open, so the disk space is never freed:
```
filebeat 58090 58134 fbuser 67r REG 253,5 1074196993 1646411 serviceaudit.log.0 (deleted)
filebeat 58090 58134 fbuser 86r REG 253,5 1074147030 1646433 serviceaudit.log.1 (deleted)
filebeat 58090 58135 fbuser 67r REG 253,5 1074196993 1646411 serviceaudit.log.3 (deleted)
```
Here is my filebeat.yml config, which is pretty much in line with the recommendations and with what other users have done:
```yaml
filebeat.inputs:
-
  type: log
  enabled: true
  paths:
    - /logapp/audit/serviceaudit*.log
    - /logapp/audit/serviceerroraudit*.log
  close_inactive: 2m
  close_removed: true
  close_renamed: true
  scan_frequency: 10s
```
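In case it is relevant, I have also been looking at a variant of the same input that adds `close_timeout` and `clean_removed`, which the 6.x log input documentation lists as options for forcing harvesters to release files; the values below are placeholders I am considering, not settings I have verified to fix the problem:

```yaml
filebeat.inputs:
-
  type: log
  enabled: true
  paths:
    - /logapp/audit/serviceaudit*.log
    - /logapp/audit/serviceerroraudit*.log
  close_inactive: 2m
  close_removed: true
  close_renamed: true
  # Hard upper bound on a harvester's lifetime, so a file handle
  # cannot be held open indefinitely even if reading never goes idle.
  close_timeout: 5m
  # Drop registry state for files that no longer exist on disk.
  clean_removed: true
  scan_frequency: 10s
```

Would adding `close_timeout` like this be the right way to guarantee the deleted files are released?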