Filebeat service is not stopping

From time to time, attempts to stop Filebeat fail.

  • version: 7.11.1
  • running as a Windows service
  • command to stop filebeat (the full sequence is sketched below):
    sc stop filebeat
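
For context, the stop is issued from a plain cmd prompt. A minimal sketch of the sequence (assumes the default service name filebeat created by install-service-filebeat.ps1):

    rem check the current state, then request a stop
    sc query filebeat
    sc stop filebeat
    rem give the service up to ~30 seconds before treating it as hung
    timeout /t 30 /nobreak
    sc query filebeat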

The Filebeat Windows service stops fine most of the time. Occasionally it takes a long time to actually stop, and this time it did not stop at all. Here is the relevant snippet from the log file:

2021-12-07T20:05:51.823-0500	INFO	beater/filebeat.go:515	Stopping filebeat
2021-12-07T20:05:51.823-0500	INFO	beater/crawler.go:148	Stopping Crawler
2021-12-07T20:05:51.823-0500	INFO	beater/crawler.go:158	Stopping 1 inputs
2021-12-07T20:05:51.823-0500	INFO	cfgfile/reload.go:227	Dynamic config reloader stopped
2021-12-07T20:05:51.825-0500	INFO	log/harvester.go:302	Harvester started for file: D:\ITSLogs\BG\tomcat\system.log.5
2021-12-07T20:05:51.827-0500	INFO	log/harvester.go:302	Harvester started for file: D:\ITS\ITS\log\itslog.log
2021-12-07T20:05:51.836-0500	INFO	[crawler]	beater/crawler.go:163	Stopping input: 5800241607937860490
2021-12-07T20:05:51.836-0500	INFO	log/harvester.go:302	Harvester started for file: D:\ITSLogs\BG\tomcat\system.log
2021-12-07T20:05:51.840-0500	INFO	log/harvester.go:302	Harvester started for file: D:\ITSLogs\BG\tomcat\system.log.4
2021-12-07T20:05:51.841-0500	INFO	log/harvester.go:302	Harvester started for file: D:\ITSLogs\BG\tomcat\system.log.3
2021-12-07T20:06:01.326-0500	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5062,"time":{"ms":78}},"total":{"ticks":22452,"time":{"ms":609},"value":22452},"user":{"ticks":17390,"time":{"ms":531}}},"handles":{"open":191},"info":{"ephemeral_id":"4edd818f-8e5c-49d3-9c8a-82923195e3d2","uptime":{"ms":82380391}},"memstats":{"gc_next":70724992,"memory_alloc":35847760,"memory_sys":66353144,"memory_total":1452944472,"rss":111443968},"runtime":{"goroutines":81}},"filebeat":{"events":{"active":5,"added":5},"harvester":{"open_files":9,"running":9,"started":5},"input":{"log":{"files":{"truncated":2}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":8},"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":17}}}}}
2021-12-07T20:06:31.328-0500	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5062},"total":{"ticks":22452,"value":22452},"user":{"ticks":17390}},"handles":{"open":191},"info":{"ephemeral_id":"4edd818f-8e5c-49d3-9c8a-82923195e3d2","uptime":{"ms":82410391}},"memstats":{"gc_next":70724992,"memory_alloc":35919112,"memory_total":1453015824,"rss":103407616},"runtime":{"goroutines":81}},"filebeat":{"harvester":{"open_files":9,"running":9}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":8},"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":17}}}}}
2021-12-07T20:07:01.331-0500	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5062},"total":{"ticks":22452,"value":22452},"user":{"ticks":17390}},"handles":{"open":189},"info":{"ephemeral_id":"4edd818f-8e5c-49d3-9c8a-82923195e3d2","uptime":{"ms":82440391}},"memstats":{"gc_next":70724992,"memory_alloc":35975240,"memory_total":1453071952,"rss":103407616},"runtime":{"goroutines":81}},"filebeat":{"harvester":{"open_files":9,"running":9}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":8},"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":17}}}}}
2021-12-07T20:07:31.330-0500	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5062},"total":{"ticks":22452,"value":22452},"user":{"ticks":17390}},"handles":{"open":189},"info":{"ephemeral_id":"4edd818f-8e5c-49d3-9c8a-82923195e3d2","uptime":{"ms":82470391}},"memstats":{"gc_next":70724992,"memory_alloc":36040024,"memory_total":1453136736,"rss":103407616},"runtime":{"goroutines":81}},"filebeat":{"harvester":{"open_files":9,"running":9}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":8},"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":17}}}}}
2021-12-07T20:08:01.304-0500	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5062},"total":{"ticks":22483,"time":{"ms":31},"value":22483},"user":{"ticks":17421,"time":{"ms":31}}},"handles":{"open":189},"info":{"ephemeral_id":"4edd818f-8e5c-49d3-9c8a-82923195e3d2","uptime":{"ms":82500391}},"memstats":{"gc_next":67293824,"memory_alloc":33640320,"memory_total":1453188000,"rss":83935232},"runtime":{"goroutines":81}},"filebeat":{"harvester":{"open_files":9,"running":9}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":8},"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":17}}}}}

From the above you can see that the stop was initiated, but new harvesters were started at the same moment and the shutdown never completed: the subsequent monitoring lines show 9 harvesters still running and 4117 events still active in the pipeline.

Is there anything that needs to be done to force the service to stop?
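
The only workaround that comes to mind is killing the process outright once the stop hangs. A rough sketch of what that would look like (assumes the default service name filebeat and a single filebeat.exe process on the host):

    rem check whether the service is stuck in STOP_PENDING and note its PID
    sc queryex filebeat
    rem last resort: force-kill the process by image name
    taskkill /F /IM filebeat.exe
    rem confirm the service is now reported as STOPPED
    sc query filebeat

But force-killing presumably skips whatever cleanup the graceful shutdown is waiting on (flushing the registry file, acknowledging the in-flight events), so I would rather understand why the graceful stop stalls in the first place.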
