Filebeat stopped automatically

Hi, everyone. I have two servers running filebeat-7.10.1. Today I found that Filebeat stopped automatically on both servers at the same time. They had been working normally for several days. We didn't kill it, the CPU and memory usage of the servers didn't change at that time, and Logstash was also working fine.
Why did Filebeat stop automatically?

Here is the log from when Filebeat stopped:

2021-03-22T09:36:38.708+0800    INFO    [monitoring]    log/log.go:145  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpuacct":{"total":{"ns":31053087778}},"memory":{"mem":{"usage":{"bytes":240500736}}}},"cpu":{"system":{"ticks":16530,"time":{"ms":2}},"total":{"ticks":69840,"time":{"ms":19},"value":69840},"user":{"ticks":53310,"time":{"ms":17}}},"handles":{"limit":{"hard":1000000,"soft":999000},"open":11},"info":{"ephemeral_id":"7f9380cd-4930-43a8-84d6-612ddfa6fff2","uptime":{"ms":235380082}},"memstats":{"gc_next":18607856,"memory_alloc":9626856,"memory_total":3577495536,"rss":147456},"runtime":{"goroutines":24}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":1.84,"15":1.52,"5":1.58,"norm":{"1":0.0575,"15":0.0475,"5":0.0494}}}}}}
2021-03-22T09:36:43.989+0800    INFO    beater/filebeat.go:515  Stopping filebeat
2021-03-22T09:36:43.990+0800    INFO    beater/crawler.go:148   Stopping Crawler
2021-03-22T09:36:43.990+0800    INFO    beater/crawler.go:158   Stopping 1 inputs
2021-03-22T09:36:43.990+0800    INFO    cfgfile/reload.go:227   Dynamic config reloader stopped
2021-03-22T09:36:43.993+0800    INFO    [crawler]       beater/crawler.go:163   Stopping input: 8445937580160083949
2021-03-22T09:36:43.993+0800    INFO    input/input.go:136      input ticker stopped
2021-03-22T09:36:43.993+0800    INFO    beater/crawler.go:178   Crawler stopped
2021-03-22T09:36:43.993+0800    INFO    [registrar]     registrar/registrar.go:132      Stopping Registrar
2021-03-22T09:36:43.993+0800    INFO    [registrar]     registrar/registrar.go:166      Ending Registrar
2021-03-22T09:36:43.994+0800    INFO    [registrar]     registrar/registrar.go:137      Registrar stopped
2021-03-22T09:36:43.996+0800    INFO    [monitoring]    log/log.go:153  Total non-zero metrics  {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000}},"id":"sshd.service"},"cpuacct":{"id":"sshd.service","total":{"ns":34068148067116432}},"memory":{"id":"sshd.service","mem":{"limit":{"bytes":9223372036854771712},"usage":{"bytes":60514062336}}}},"cpu":{"system":{"ticks":16530,"time":{"ms":16533}},"total":{"ticks":69840,"time":{"ms":69850},"value":69840},"user":{"ticks":53310,"time":{"ms":53317}}},"handles":{"limit":{"hard":1000000,"soft":999000},"open":10},"info":{"ephemeral_id":"7f9380cd-4930-43a8-84d6-612ddfa6fff2","uptime":{"ms":235385370}},"memstats":{"gc_next":18607856,"memory_alloc":10113784,"memory_total":3577982464,"rss":43360256},"runtime":{"goroutines":12}},"filebeat":{"events":{"added":171,"done":171},"harvester":{"closed":2,"open_files":0,"running":0,"started":2}},"libbeat":{"config":{"module":{"running":0},"reloads":1,"scans":1},"output":{"events":{"acked":166,"batches":9,"total":166},"read":{"bytes":66},"type":"logstash","write":{"bytes":16910}},"pipeline":{"clients":0,"events":{"active":0,"filtered":5,"published":166,"retry":86,"total":171},"queue":{"acked":166}}},"registrar":{"states":{"current":1,"update":171},"writes":{"success":14,"total":14}},"system":{"cpu":{"cores":32},"load":{"1":1.85,"15":1.53,"5":1.59,"norm":{"1":0.0578,"15":0.0478,"5":0.0497}}}}}}
2021-03-22T09:36:43.996+0800    INFO    [monitoring]    log/log.go:154  Uptime: 65h23m5.371158834s
2021-03-22T09:36:43.996+0800    INFO    [monitoring]    log/log.go:131  Stopping metrics logging.
2021-03-22T09:36:43.997+0800    INFO    instance/beat.go:461    filebeat stopped.

Here is my Filebeat configuration:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    #- /tmp/nginx/access.log
    - /data/nlu/api_gateway_logs/access.log
  fields:
    project_name: api-gateway-prd
    log_source: access
  #tail_files: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

output.logstash:
  hosts: ["10.197.55.4:45044"]

processors:
- add_host_metadata: ~
- add_cloud_metadata: ~

The command used to start Filebeat:
nohup filebeat -c /data/nlu/filebeat/filebeat.yml -e > /data/nlu/filebeat/filebeat.log 2>&1 &


Hi!

I don't see any reason why this would happen. It's really weird. Could it be that the OS sent a stop signal for some reason? You could also consider running Filebeat as a Linux service instead of running it in the background, for example with systemd as sketched below. I think that would be more effective at avoiding unwanted stops.
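
A minimal systemd unit sketch, assuming a tarball install with the binary at /usr/local/bin/filebeat and the config under /data/nlu/filebeat (adjust the paths to your layout; the RPM/DEB packages already ship their own filebeat.service):

[Unit]
Description=Filebeat log shipper
After=network.target

[Service]
ExecStart=/usr/local/bin/filebeat -c /data/nlu/filebeat/filebeat.yml -e
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/filebeat.service, then run systemctl daemon-reload and systemctl enable --now filebeat. A service is not attached to any login shell, so closing a terminal can never send it SIGHUP, and Restart=always brings it back if it ever exits.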

Thanks for your reply. I finally found the cause of the problem. Today I closed the Xshell terminal in which I had started the nohup process, and closing it stopped the nohup process.

For anyone who runs into the same problem: if you close the interactive shell in which you started the nohup process, the shell sends a SIGHUP signal to the jobs in its job list, which can include the nohup process, and the process then stops.
To avoid this, use exit to leave the shell before you close the terminal, or add disown to the start command:

nohup filebeat -c /data/nlu/filebeat/filebeat.yml -e > /data/nlu/filebeat/filebeat.log 2>&1 & disown

Then the nohup process will not receive the SIGHUP signal after you close the terminal.
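
If Filebeat is already running as a background job of an open shell, you can also detach it after the fact with standard bash job control. This is just an illustrative sketch; %1 should be whatever job number jobs reports for Filebeat:

jobs            # list background jobs and find the filebeat job number
disown -h %1    # mark job 1 so the shell will not send it SIGHUP on exit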

