Filebeat preventing creation of new logs in rotation?

My software currently cycles logs by gzipping the filled log and creating a new one with the same name.
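To illustrate the scheme (purely a sketch; the real software is not specified and app.log is an assumed name), the rotation amounts to something like:

```python
import gzip
import os
import shutil

def rotate(log_path="app.log"):
    """Hypothetical illustration of the rotation described above:
    gzip the filled log, delete it, recreate an empty file with the same name."""
    # Compress the filled log to a .gz alongside it.
    with open(log_path, "rb") as src, gzip.open(log_path + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)
    # Delete the original. On Windows, deletion can fail or be deferred
    # while another process still holds an open handle on the file.
    os.remove(log_path)
    # Start a fresh, empty log under the same name.
    open(log_path, "w").close()
```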

I have Filebeat harvesting from the unzipped log only. The beat happily reads the unzipped log as it is being written and, when the log is zipped, closes with the message "Closing because close_removed is enabled." This behavior is expected and not in itself problematic.

What does become an issue is that my software fails to create a new log at this point, despite the fact that Filebeat supposedly has closed the harvester.

I don't currently have the ability to debug the logging software on the write end, but I thought I'd ask how Filebeat manages open files and whether the solution could lie within the Filebeat configuration.

Filebeat identifies files by their inode and device ID (or the equivalent file identifiers on Windows), not by name. It scans the configured paths at a set interval for files that need to be read. The close_* options are applied once Filebeat reaches the EOF of a file.
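For reference, this behaviour is governed by the close_* and clean_* options on the prospector. A sketch, with option names from the Filebeat docs and the values shown only as the usual defaults:

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - 'C:\Users\path\to\logname.log'
    # When to give up the open handle on a file:
    close_removed: true    # close as soon as the file is removed (the message you saw)
    close_renamed: false   # keep harvesting if the file is merely renamed
    close_inactive: 5m     # close after this long without new data
    # When to forget the file in the registry:
    clean_removed: true    # drop the state once the file disappears from disk
```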

Can your logging software create a file after Filebeat has started to monitor the path, i.e. while the file does not yet exist?
What's the output of lsof -c filebeat and lsof -c {{ your-logging-sw }}?
Could you please share your configuration and format it using </>? Please also attach debug logs.

(I never mentioned it before, but I am currently running Filebeat on Windows.)

Yes, the software can create a file after Filebeat starts monitoring the path.

I've run Process Explorer in lieu of lsof, and it reveals that neither the software nor Filebeat has the file open after the log is zipped.

The configuration is as generic as possible: a single prospector with two paths and no further specifications (besides output).
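In outline it amounts to nothing more than this (paths and output are placeholders rather than the real values):

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - 'C:\path\to\first.log'
      - 'C:\path\to\second.log'

output.elasticsearch:
  hosts: ['localhost:9200']
```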

2018-09-06T16:23:00.587+0100    INFO    [monitoring]    log/log.go:141  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":625,"time":{"ms":63}},"total":{"ticks":4687,"time":{"ms":813},"value":4687},"user":{"ticks":4062,"time":{"ms":750}}},"info":{"ephemeral_id":"826d9a5d-48f2-415a-8f00-63e388c58d0c","uptime":{"ms":270179}},"memstats":{"gc_next":28841584,"memory_alloc":15767008,"memory_total":3065278312,"rss":131072}},"filebeat":{"events":{"active":-1,"added":28,"done":29},"harvester":{"open_files":2,"running":2}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":29,"active":-1,"batches":23,"total":28},"read":{"bytes":144},"write":{"bytes":2138689}},"pipeline":{"clients":1,"events":{"active":1,"published":28,"total":28},"queue":{"acked":29}}},"registrar":{"states":{"current":2,"update":29},"writes":{"success":24,"total":24}}}}}
2018-09-06T16:23:30.532+0100    INFO    log/harvester.go:268    File was removed: C:\Users\path\to\logname.log. Closing because close_removed is enabled.
2018-09-06T16:23:30.587+0100    INFO    [monitoring]    log/log.go:141  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":750,"time":{"ms":125}},"total":{"ticks":5546,"time":{"ms":859},"value":5546},"user":{"ticks":4796,"time":{"ms":734}}},"info":{"ephemeral_id":"826d9a5d-48f2-415a-8f00-63e388c58d0c","uptime":{"ms":300179}},"memstats":{"gc_next":42513424,"memory_alloc":31938504,"memory_total":3658797352,"rss":53248}},"filebeat":{"events":{"active":-1,"added":266,"done":267},"harvester":{"closed":1,"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":266,"batches":25,"total":266},"read":{"bytes":150},"write":{"bytes":1920946}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"published":265,"total":266},"queue":{"acked":266}}},"registrar":{"states":{"current":2,"update":267},"writes":{"success":26,"total":26}}}}}

If I can get access to the software's code, I'll see whether the issue can be sidestepped by having the log file copied and cleared in place, rather than zipped and recreated. I'll pursue this route further.
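Roughly what I have in mind, sketched in Python only for illustration (file names are placeholders):

```python
import gzip
import shutil

def rotate_copytruncate(log_path="app.log"):
    """Copy the live log into a gzipped archive, then clear it in place,
    so the original file (and any handle Filebeat has on it) stays put."""
    with open(log_path, "r+b") as src:
        # Copy the current contents into the archive.
        with gzip.open(log_path + ".gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        # Clear the original instead of deleting and recreating it.
        src.truncate(0)
        # Caveat: anything written between the copy and the truncate is lost,
        # which is the usual trade-off of copy-truncate rotation.
```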
