Dear Metricbeat experts,
I found that Metricbeat 7.9.2 shut down silently. After enabling debug logging, the log shows "service/service.go:56 Received sighup, stopping". I started Metricbeat as the root user, and a search of the shell history shows no command that would have killed the Metricbeat process.
Please let me know if I need to provide any additional information. Thanks in advance!
My use case: using Metricbeat to stream Kafka metrics to an Elasticsearch server.
By the way, when I was using Metricbeat 7.6.2 previously, it also shut down silently.
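
For context, the message from service/service.go:56 suggests the Beat's signal handler treats SIGHUP as a shutdown request rather than a config reload. Below is a minimal Go sketch of that general pattern (my own illustration, not the actual libbeat code), showing why any SIGHUP delivered to the process, for example when its controlling terminal goes away, would produce exactly this "Received sighup, stopping" message:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Subscribe to termination signals. The debug log below suggests the Beat
	// handles SIGHUP the same way as SIGINT/SIGTERM: stop, not reload.
	sigc := make(chan os.Signal, 1)
	signal.Notify(sigc, syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM)

	sig := <-sigc
	fmt.Printf("Received %v, stopping\n", sig)
	// A real service would run its graceful shutdown here.
}
```

In this pattern SIGHUP is not a reload signal, which matches the "Received sighup, stopping" line I see. Here is the tail of the debug log around the shutdown: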
2020-10-22T05:58:47.041Z DEBUG [elasticsearch] elasticsearch/client.go:229 PublishEvents: 50 events have been published to elasticsearch in 23.594621ms.
2020-10-22T05:58:47.048Z DEBUG [elasticsearch] elasticsearch/client.go:229 PublishEvents: 4 events have been published to elasticsearch in 6.44271ms.
2020-10-22T05:58:47.059Z DEBUG [elasticsearch] elasticsearch/client.go:229 PublishEvents: 50 events have been published to elasticsearch in 41.440767ms.
2020-10-22T05:58:47.059Z DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [171942: 0, 50]
2020-10-22T05:58:47.059Z DEBUG [publisher] memqueue/ackloop.go:128 ackloop: return ack to broker loop:50
2020-10-22T05:58:47.059Z DEBUG [publisher] memqueue/ackloop.go:131 ackloop: done send ack
2020-10-22T05:58:47.098Z DEBUG [elasticsearch] elasticsearch/client.go:229 PublishEvents: 50 events have been published to elasticsearch in 80.098984ms.
2020-10-22T05:58:47.101Z DEBUG [elasticsearch] elasticsearch/client.go:229 PublishEvents: 50 events have been published to elasticsearch in 83.922879ms.
2020-10-22T05:58:47.105Z DEBUG [elasticsearch] elasticsearch/client.go:229 PublishEvents: 50 events have been published to elasticsearch in 87.20375ms.
2020-10-22T05:58:47.105Z DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [171943: 0, 50]
2020-10-22T05:58:47.105Z DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [171944: 0, 50]
2020-10-22T05:58:47.105Z DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [171945: 0, 50]
2020-10-22T05:58:47.105Z DEBUG [publisher] memqueue/ackloop.go:128 ackloop: return ack to broker loop:150
2020-10-22T05:58:47.105Z DEBUG [publisher] memqueue/ackloop.go:131 ackloop: done send ack
2020-10-22T05:58:47.121Z DEBUG [elasticsearch] elasticsearch/client.go:229 PublishEvents: 50 events have been published to elasticsearch in 103.569262ms.
2020-10-22T05:58:47.121Z DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [171946: 0, 50]
2020-10-22T05:58:47.121Z DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [171947: 0, 50]
2020-10-22T05:58:47.121Z DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [171948: 0, 4]
2020-10-22T05:58:47.121Z DEBUG [publisher] memqueue/ackloop.go:128 ackloop: return ack to broker loop:104
2020-10-22T05:58:47.121Z DEBUG [publisher] memqueue/ackloop.go:131 ackloop: done send ack
2020-10-22T05:58:56.747Z DEBUG [service] service/service.go:56 Received sighup, stopping
2020-10-22T05:58:56.747Z INFO cfgfile/reload.go:227 Dynamic config reloader stopped
2020-10-22T05:58:56.747Z INFO [reload] cfgfile/list.go:124 Stopping 4 runners ...
2020-10-22T05:58:56.747Z DEBUG [reload] cfgfile/list.go:135 Stopping runner: RunnerGroup{aws [metricsets=1]}
Thanks,
Roy