Metricbeat 7.9.2 shuts down silently

Dear Metricbeat experts,

I found that Metricbeat 7.9.2 was shut down silently. I enabled debug logging, and the log shows service/service.go:56 Received sighup, stopping. I started Metricbeat as the root user, and I searched the shell history; there is no command that would have killed the Metricbeat process.

Please feel free to let me know if I need to provide additional info. Thanks in advance!

My use case: use Metricbeat to stream Kafka metrics to an Elasticsearch cluster.
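
For reference, a minimal prometheus module file under modules.d for scraping Kafka exporter metrics would look roughly like the sketch below. This is only an illustration; the host, port, and metrics path are placeholders, not my real exporter address:

- module: prometheus
  metricsets: ["collector"]
  period: 10s
  # placeholder: address of the Kafka Prometheus/JMX exporter
  hosts: ["localhost:9404"]
  metrics_path: /metrics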

BTW, when I used Metricbeat 7.6.2 before, it was shut down silently as well.

2020-10-22T05:58:47.041Z        DEBUG   [elasticsearch] elasticsearch/client.go:229     PublishEvents: 50 events have been published to elasticsearch in 23.594621ms.
2020-10-22T05:58:47.048Z        DEBUG   [elasticsearch] elasticsearch/client.go:229     PublishEvents: 4 events have been published to elasticsearch in 6.44271ms.
2020-10-22T05:58:47.059Z        DEBUG   [elasticsearch] elasticsearch/client.go:229     PublishEvents: 50 events have been published to elasticsearch in 41.440767ms.
2020-10-22T05:58:47.059Z        DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [171942: 0, 50]
2020-10-22T05:58:47.059Z        DEBUG   [publisher]     memqueue/ackloop.go:128 ackloop: return ack to broker loop:50
2020-10-22T05:58:47.059Z        DEBUG   [publisher]     memqueue/ackloop.go:131 ackloop:  done send ack
2020-10-22T05:58:47.098Z        DEBUG   [elasticsearch] elasticsearch/client.go:229     PublishEvents: 50 events have been published to elasticsearch in 80.098984ms.
2020-10-22T05:58:47.101Z        DEBUG   [elasticsearch] elasticsearch/client.go:229     PublishEvents: 50 events have been published to elasticsearch in 83.922879ms.
2020-10-22T05:58:47.105Z        DEBUG   [elasticsearch] elasticsearch/client.go:229     PublishEvents: 50 events have been published to elasticsearch in 87.20375ms.
2020-10-22T05:58:47.105Z        DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [171943: 0, 50]
2020-10-22T05:58:47.105Z        DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [171944: 0, 50]
2020-10-22T05:58:47.105Z        DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [171945: 0, 50]
2020-10-22T05:58:47.105Z        DEBUG   [publisher]     memqueue/ackloop.go:128 ackloop: return ack to broker loop:150
2020-10-22T05:58:47.105Z        DEBUG   [publisher]     memqueue/ackloop.go:131 ackloop:  done send ack
2020-10-22T05:58:47.121Z        DEBUG   [elasticsearch] elasticsearch/client.go:229     PublishEvents: 50 events have been published to elasticsearch in 103.569262ms.
2020-10-22T05:58:47.121Z        DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [171946: 0, 50]
2020-10-22T05:58:47.121Z        DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [171947: 0, 50]
2020-10-22T05:58:47.121Z        DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [171948: 0, 4]
2020-10-22T05:58:47.121Z        DEBUG   [publisher]     memqueue/ackloop.go:128 ackloop: return ack to broker loop:104
2020-10-22T05:58:47.121Z        DEBUG   [publisher]     memqueue/ackloop.go:131 ackloop:  done send ack
2020-10-22T05:58:56.747Z        DEBUG   [service]       service/service.go:56   Received sighup, stopping
2020-10-22T05:58:56.747Z        INFO    cfgfile/reload.go:227   Dynamic config reloader stopped
2020-10-22T05:58:56.747Z        INFO    [reload]        cfgfile/list.go:124     Stopping 4 runners ...
2020-10-22T05:58:56.747Z        DEBUG   [reload]        cfgfile/list.go:135     Stopping runner: RunnerGroup{aws [metricsets=1]}
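
For reference, one way to find out which process (if any) sends the SIGHUP is an audit rule on the kill syscall. This is only a sketch, assuming auditd is installed; the key name sighup_trace is arbitrary:

# record every kill() that delivers signal 1 (SIGHUP); a1 is the signal argument
auditctl -a always,exit -F arch=b64 -S kill -F a1=1 -k sighup_trace
# after Metricbeat stops, list the matching events (they include the sender's PID and executable)
ausearch -k sighup_trace

If nothing matches, the SIGHUP likely came from the kernel rather than from another process, e.g. because the controlling terminal of the session that started Metricbeat went away, since that path does not go through kill().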

Thanks,
Roy

Can you share your Metricbeat config and provide a slightly longer log? How long was it running before being shut down?

Dear @Mario_Castro,

Sorry, I didn't save the log. Regarding "how long was it running before being shut down": sometimes 2 hours, sometimes 5 hours.

Regarding the log, I will provide a longer log when I reproduce the issue again. Thanks!

Metricbeat Config

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.template.name: "metricbeat"
setup.template.pattern: "metricbeat-*"
setup.ilm.enabled: false
setup.kibana:
  host: "x.x.x.x:5601"
  username: "elastic"
  password: "xxxxxxx" 
output.elasticsearch:
  hosts: ["x.x.x.x:9200","x.x.x.x:9200","x.x.x.x:9200","x.x.x.x:9200","x.x.x.x:9200","x.x.x.x:9200"]
  indices:
    - index: "metricbeat-%{[prometheus.labels.job]}-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.has_fields: ['prometheus.labels.job']
    - index: "metricbeat-aws.cloudwatch-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.has_fields: ['aws.cloudwatch.namespace']
  protocol: "http"
  username: "elastic"
  password: "xxx"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
logging.level: debug
logging.to_files: true
logging.files:
  path: /server/metricbeat/metricbeat/logs
  name: metricbeat.log
  keepfiles: 7
  permissions: 0644
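
One more thing I will double-check, since the SIGHUP has to come from somewhere: whether Metricbeat is running under systemd or was started by hand from an SSH session. If it was started manually, the kernel delivers SIGHUP to the session's processes when the terminal goes away, which would match the "Received sighup, stopping" line. A rough sketch of the checks (the metricbeat unit name assumes the package install):

# is Metricbeat managed by systemd, or was it started manually?
systemctl status metricbeat
# show the parent process and controlling terminal of the running process
ps -o pid,ppid,tty,cmd -C metricbeat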
