Filebeat agent seems to die/exit when a log file it's watching rotates

I'm probably doing something obviously wrong here. In a nutshell: is there anything special you need to do to handle rotation of the log files being shipped by Filebeat? I would have thought it'd be smart enough to handle this out of the box, so to speak. These are just typical nginx/apache access/error logs. Nothing out of the ordinary (to my eyes) is getting logged when this occurs, either; the agent simply exits.

2018-03-29T17:16:23.192Z	DEBUG	[harvester]	log/harvester.go:489	harvester cleanup finished for file: /var/log/nginx/access.log
2018-03-29T17:16:23.192Z	DEBUG	[harvester]	log/harvester.go:489	harvester cleanup finished for file: /var/log/nginx/xxxxxxx_access.log
2018-03-29T17:16:23.192Z	INFO	crawler/crawler.go:135	Crawler stopped
2018-03-29T17:16:23.192Z	INFO	registrar/registrar.go:210	Stopping Registrar
2018-03-29T17:16:23.194Z	DEBUG	[registrar]	registrar/registrar.go:253	Registry file updated. 10 states written.
2018-03-29T17:16:23.194Z	INFO	registrar/registrar.go:165	Ending Registrar
2018-03-29T17:16:23.194Z	DEBUG	[registrar]	registrar/registrar.go:228	Write registry file: /var/lib/filebeat/registry
2018-03-29T17:16:23.196Z	DEBUG	[registrar]	registrar/registrar.go:253	Registry file updated. 10 states written.
2018-03-29T17:16:23.196Z	INFO	instance/beat.go:308	filebeat stopped.
2018-03-29T17:16:23.198Z	INFO	[monitoring]	log/log.go:132	Total non-zero metrics	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":20},"total":{"ticks":80,"time":84,"value":80},"user":{"ticks":60,"time":64}},"info":{"ephemeral_id":"d3867ffa-7379-4ba0-bf07-9706ce3f6fbd","uptime":{"ms":63}},"memstats":{"gc_next":4194304,"memory_alloc":2491200,"memory_total":6954800,"rss":25440256}},"filebeat":{"events":{"active":397,"added":410,"done":13},"harvester":{"closed":4,"open_files":0,"running":0,"started":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":392,"filtered":18,"published":392,"total":410}}},"registrar":{"states":{"current":10,"update":13},"writes":14},"system":{"cpu":{"cores":4},"load":{"1":0.72,"15":0.89,"5":0.85,"norm":{"1":0.18,"15":0.2225,"5":0.2125}}}}}}
2018-03-29T17:16:23.198Z	INFO	[monitoring]	log/log.go:133	Uptime: 63.606153ms
2018-03-29T17:16:23.198Z	INFO	[monitoring]	log/log.go:110	Stopping metrics logging.
2018-03-29T17:16:23.202Z	ERROR	instance/beat.go:667	Exiting: Error in initing prospector: Can only start a prospector when all related states are finished: {Id: Finished:false Fileinfo:0xc42041e8f0 Source:/var/log/nginx/access.log Offset:3709 Timestamp:2018-03-29 17:16:23.178816475 +0000 UTC m=+0.048943884 TTL:-1ns Type:log FileStateOS:1041418-51713}

Which Filebeat version are you using?

The error message is not related to file rotation, but to prospector/module reloading.

filebeat v6.2.3

Can you also share your configuration?

It seems pretty simple/straightforward to me, but maybe that's the problem. I will note that I previously also had the nginx module enabled, with the nginx access/error logs also included, but I removed them to see if it made any difference. The agents don't seem to die as consistently now, but they do still die.

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

filebeat.modules:
- module: apache2
  enabled: true

filebeat.prospectors:
- type: log
  paths:
    - "/var/log/apache2/*access.log"
  document_type: apache-access
  exclude_files: ['\.gz$', 'healthcheck']

- type: log
  paths:
    - "/var/log/apache2/*error.log"
  document_type: apache-error
  exclude_files: ['\.gz$', 'healthcheck']

  host: ""

output.elasticsearch:
  hosts: [""]
  protocol: "http"

logging.level: info

Just skimming your configuration, I see you have the apache module enabled plus a prospector collecting the apache logs. This config results in at least two prospectors trying to collect the very same files. One of them dies because it cannot start up, since the first prospector already holds an 'internal' lock on the file.

Either use the apache module or do the processing yourself using prospectors. Removing the prospector configuration should help.
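If you go the module-only route, one way to skip a specific file is to narrow the module's path globs so the unwanted file never matches. A minimal sketch of a modules.d/apache2.yml, with hypothetical filenames; `var.paths` replaces the fileset's default glob:

```yaml
# modules.d/apache2.yml (sketch; the site1_* filename is a made-up example)
- module: apache2
  access:
    enabled: true
    # Override the fileset's default path glob. Only files matching these
    # globs are harvested; anything else in /var/log/apache2/ is ignored.
    var.paths: ["/var/log/apache2/site1_access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/apache2/*error.log*"]
```

With this in place, the duplicate prospector definitions for the apache logs can be dropped from filebeat.yml, so only one prospector ever holds each file.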

Ah ok, that makes sense. I thought you had to enable the module and also point Filebeat at the logs. By default, is it going to snarf in every logfile that exists in /var/log/apache? There is one specific log file in there I'd rather not have it ingest, and I wasn't sure how to go about that without explicitly adding the files via prospectors.

Never mind, I RTFM'd. Thanks for the response though.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.