Filebeat terminating || service/service.go:56 Received sighup, stopping

Hi,

I am using Filebeat 7.9.2 to send log data plus internal collection metrics from our prod environment servers (Linux boxes) to an ELK stack (7.9.2) via a 3-node Kafka cluster (2.6.0).

However, Filebeat stops/terminates on its own at random times, and I have to ask our data centre team to restart the Filebeat process each time.

The Filebeat DEBUG log shows that SIGHUP was received (shown below).
Any advice on what checks to perform and how to resolve this?

'2020-09-30T14:12:18.083Z DEBUG [monitoring] memqueue/eventloop.go:255 no state set
2020-09-30T14:12:18.083Z DEBUG [monitoring] memqueue/eventloop.go:228 handle ACK took: 25.859µs
2020-09-30T14:12:18.083Z DEBUG [monitoring] memqueue/ackloop.go:128 ackloop: return ack to broker loop:1
2020-09-30T14:12:18.083Z DEBUG [monitoring] memqueue/ackloop.go:131 ackloop: done send ack
2020-09-30T14:12:32.051Z DEBUG [service] service/service.go:56 Received sighup, stopping
2020-09-30T14:12:32.051Z INFO beater/filebeat.go:515 Stopping filebeat
2020-09-30T14:12:32.052Z INFO beater/crawler.go:148 Stopping Crawler
2020-09-30T14:12:32.052Z INFO beater/crawler.go:158 Stopping 1 inputs
2020-09-30T14:12:32.052Z INFO [crawler] beater/crawler.go:163 Stopping input: 678850850155029664
2020-09-30T14:12:32.052Z INFO input/input.go:136 input ticker stopped
2020-09-30T14:12:32.052Z DEBUG [reader_multiline] multiline/pattern.go:141 Multiline event flushed because timeout reached.
2020-09-30T14:12:32.052Z INFO log/harvester.go:326 Reader was closed: /logs/project1/client1/prod/MSG3/project1/ireslog.log. Closing.
2020-09-30T14:12:32.052Z DEBUG [harvester] log/harvester.go:601 Stopping harvester for file: /logs/project1/client1/prod/MSG3/project1/ireslog.log
2020-09-30T14:12:32.052Z DEBUG [harvester] log/harvester.go:611 Closing file: /logs/project1/client1/prod/MSG3/project1/ireslog.log
2020-09-30T14:12:32.052Z DEBUG [harvester] log/harvester.go:485 Update state: /logs/project1/client1/prod/MSG3/project1/ireslog.log, offset: 5811398
2020-09-30T14:12:32.052Z DEBUG [harvester] log/harvester.go:622 harvester cleanup finished for file: /logs/project1/client1/prod/MSG3/project1/ireslog.log
2020-09-30T14:12:32.052Z DEBUG [publisher] pipeline/client.go:157 client: closing acker
2020-09-30T14:12:32.052Z DEBUG [publisher] pipeline/client.go:162 client: done closing acker
2020-09-30T14:12:32.052Z DEBUG [publisher] pipeline/client.go:164 client: unlink from queue
2020-09-30T14:12:32.052Z DEBUG [publisher] pipeline/client.go:177 client: cancelled 0 events
2020-09-30T14:12:32.052Z DEBUG [publisher] pipeline/client.go:166 client: done unlink
2020-09-30T14:12:32.052Z INFO beater/crawler.go:178 Crawler stopped
2020-09-30T14:12:32.052Z INFO [registrar] registrar/registrar.go:132 Stopping Registrar
2020-09-30T14:12:32.052Z INFO [registrar] registrar/registrar.go:166 Ending Registrar
2020-09-30T14:12:32.052Z DEBUG [registrar] registrar/registrar.go:167 Stopping Registrar
2020-09-30T14:12:32.052Z INFO [registrar] registrar/registrar.go:137 Registrar stopped
2020-09-30T14:12:32.053Z INFO [monitoring] log/log.go:153 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":2970,"time":{"ms":2970}},"total":{"ticks":12290,"time":{"ms":12299},"value":12290},"user":{"ticks":9320,"time":{"ms":9329}}},"handles":{"limit":{"hard":65536,"soft":65536},"open":17},"info":{"ephemeral_id":"2bc5b7b9-3366-4c20-b210-814f1f13c6a9","uptime":{"ms":35324088}},"memstats":{"gc_next":30461344,"memory_alloc":18236280,"memory_total":1551118496,"rss":51933184},"runtime":{"goroutines":48}},"filebeat":{"events":{"added":7091,"done":7091},"harvester":{"closed":1,"open_files":0,"running":0,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":7087,"batches":1176,"total":7087},"type":"kafka"},"outputs":{"kafka":{"bytes_read":255830,"bytes_write":3492655}},"pipeline":{"clients":0,"events":{"active":0,"filtered":4,"published":7087,"retry":15,"total":7091},"queue":{"acked":7087}}},"registrar":{"states":{"cleanup":1,"current":1,"update":7091},"writes":{"success":1180,"total":1180}},"system":{"cpu":{"cores":8},"load":{"1":0.62,"15":0.76,"5":0.73,"norm":{"1":0.0775,"15":0.095,"5":0.0913}}}}}}
2020-09-30T14:12:32.054Z INFO [monitoring] log/log.go:154 Uptime: 9h48m44.089134065s
2020-09-30T14:12:32.054Z INFO [monitoring] log/log.go:131 Stopping metrics logging.
2020-09-30T14:12:32.054Z DEBUG [monitoring] pipeline/client.go:157 client: closing acker
2020-09-30T14:12:32.054Z DEBUG [monitoring] pipeline/client.go:162 client: done closing acker
2020-09-30T14:12:32.054Z DEBUG [monitoring] pipeline/client.go:164 client: unlink from queue
2020-09-30T14:12:32.054Z DEBUG [monitoring] pipeline/client.go:177 client: cancelled 0 events
2020-09-30T14:12:32.054Z DEBUG [monitoring] pipeline/client.go:166 client: done unlink
2020-09-30T14:12:32.054Z DEBUG [monitoring] pipeline/pipeline.go:200 close pipeline
2020-09-30T14:12:32.054Z INFO [api] api/server.go:66 Stats endpoint (10.196.31.72:5066) finished: accept tcp 10.196.31.72:5066: use of closed network connection
2020-09-30T14:12:32.054Z INFO [monitoring] elasticsearch/elasticsearch.go:280 Stop monitoring stats metrics snapshot loop.
2020-09-30T14:12:32.054Z INFO [monitoring] elasticsearch/elasticsearch.go:280 Stop monitoring state metrics snapshot loop.
2020-09-30T14:12:32.055Z INFO instance/beat.go:456 filebeat stopped.'

Welcome to our community! :smiley:

SIGHUP would likely mean something is asking the process to stop. What else is running on the host that might cause this?
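One quick check from the host side (just a sketch, assuming Filebeat was started manually from a shell rather than via systemd/init, and that procps `ps` is available) is whether the process is still attached to a terminal:

# List the Filebeat process with its parent PID and controlling terminal.
# A TTY value like pts/0 (instead of ?) means the process is still tied to
# an interactive session and will receive SIGHUP when that session ends.
ps -C filebeat -o pid,ppid,tty,etime,cmd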

We have 3 WebLogic managed servers (prod) running on that VM.
Our DC team also has Auditbeat running on it, shipping to their own monitoring ELK stack for auditing.

I have asked them to check whether they killed/terminated any process, including my Filebeat, at that time.
Is there anything additional I could check from my end?

Hi,
The issue was resolved with the disown command on the Filebeat PID.
It seems that when the SSH terminal was closed after starting the Filebeat process, SIGHUP was sent to the Filebeat PID.
Filebeat is running now without any issues.
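For anyone hitting the same thing, a minimal sketch of that workaround (the binary path and config file name below are placeholders; adjust to your install): start Filebeat in the background with nohup, then disown it so the shell does not forward SIGHUP when the SSH session closes.

# Placeholder paths; run from the Filebeat install directory.
nohup ./filebeat -e -c filebeat.yml > filebeat.nohup.out 2>&1 &
disown   # drop the job from the shell's job table so logout no longer sends it SIGHUP

If Filebeat was installed from the DEB/RPM packages, running it as a service (e.g. systemctl start filebeat) is generally the more durable option, since the process never inherits a controlling terminal and is restarted automatically.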

