Filebeat Process not clearing after STDIN input stops

I am running the Mesos external logger, which spawns a dynamic Filebeat process per container to forward logs to Graylog. When a container terminates, the STDIN content stops flowing, and I expect the Filebeat process to terminate once it reaches end of file. That is not happening, so I end up killing the process manually. Since these containers are spawned dynamically, it is hard to tell obsolete processes from active ones during cleanup. I have tried the -once flag together with the shutdown_timeout and close_eof: true options. Below is my filebeat.yml file.

    filebeat:
      shutdown_timeout: 0s
      prospectors:
        -
          paths:
            - "-"
          input_type: stdin
          close_eof: true
          fields:
            mesos_log_stream: STDOUT
            mesos_log_sandbox_directory: /u01/local/file

    output:
      logstash:
        hosts: ["x.x.x.x:5044"]

Command:

    filebeat -c filebeat.yml -once
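
In the meantime, a per-container wrapper can reap the process itself when -once fails to exit. A rough, untested sketch, where `produce_container_logs` is a placeholder for whatever writes this container's output and the 30-second grace period is arbitrary:

    #!/bin/sh
    # Feed Filebeat through a FIFO so the wrapper knows when stdin hits EOF.
    FIFO=$(mktemp -u)
    mkfifo "$FIFO"

    # Filebeat reads the FIFO as its stdin, in the background.
    /u01/filebeat-6.1.1-linux-x86_64/filebeat -c filebeat.yml -once < "$FIFO" &
    FB_PID=$!

    # Produce logs in the foreground; when this returns, Filebeat's stdin is at EOF.
    produce_container_logs > "$FIFO"

    # Give Filebeat a grace period to flush to Logstash, then kill it if still alive.
    for i in $(seq 1 30); do
        kill -0 "$FB_PID" 2>/dev/null || break
        sleep 1
    done
    kill "$FB_PID" 2>/dev/null
    rm -f "$FIFO"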

I remember we had issues where Filebeat was shutting down too early: https://github.com/elastic/beats/issues/3315 So in your case FB does not shut down at all? Can you share the Filebeat logs, ideally at debug level?

nohup: ignoring input and redirecting stderr to stdout
2018/01/17 10:45:26.395772 beat.go:436: INFO Home path: [/u01/filebeat-6.1.1-linux-x86_64] Config path: [/u01/mesos/filebeat-6.1.1-linux-x86_64] Data path: [/u01/mesos/filebeat-6.1.1-linux-x86_64/data] Logs path: [/u01/mesos/filebeat-6.1.1-linux-x86_64/logs]
2018/01/17 10:45:26.395825 metrics.go:23: INFO Metrics logging every 30s
2018/01/17 10:45:26.395828 beat.go:463: DBG [beat] Beat metadata path: /u01/mesos/filebeat-6.1.1-linux-x86_64/data/meta.json
2018/01/17 10:45:26.395937 beat.go:443: INFO Beat UUID: 547d256a-e1fd-490c-af2e-f28753783984
2018/01/17 10:45:26.395955 beat.go:203: INFO Setup Beat: filebeat; Version: 6.1.1
2018/01/17 10:45:26.395965 beat.go:215: DBG [beat] Initializing output plugins
2018/01/17 10:45:26.395977 processor.go:49: DBG [processors] Processors:
2018/01/17 10:45:26.396383 logger.go:18: DBG [publish] start pipeline event consumer
2018/01/17 10:45:26.396405 module.go:76: INFO Beat name: service1..com
2018/01/17 10:45:26.396933 beat.go:276: INFO filebeat start running.
2018/01/17 10:45:26.396991 registrar.go:88: INFO Registry file set to: /u01/mesos/filebeat-6.1.1-linux-x86_64/data/registry
2018/01/17 10:45:26.397028 registrar.go:108: INFO Loading registrar data from /u01/mesos/filebeat-6.1.1-linux-x86_64/data/registry
2018/01/17 10:45:26.397063 registrar.go:119: INFO States Loaded from registrar: 0
2018/01/17 10:45:26.397095 filebeat.go:261: WARN Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018/01/17 10:45:26.397122 crawler.go:48: INFO Loading Prospectors: 1
2018/01/17 10:45:26.397183 registrar.go:150: INFO Starting Registrar
2018/01/17 10:45:26.397185 cfgwarn.go:18: WARN DEPRECATED: input_type prospector config is deprecated. Use type instead. Will be removed in version: 6.0.0
2018/01/17 10:45:26.397364 processor.go:49: DBG [processors] Processors:
2018/01/17 10:45:26.397770 prospector.go:87: INFO Starting prospector of type: stdin; ID: 3845813333099825968
2018/01/17 10:45:26.397899 prospector.go:138: INFO Stopping Prospector: 3845813333099825968
2018/01/17 10:45:26.397935 crawler.go:82: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2018/01/17 10:45:26.397967 harvester.go:215: INFO Harvester started for file: -
2018/01/17 10:45:26.397981 filebeat.go:273: INFO Running filebeat once. Waiting for completion ...
2018/01/17 10:45:26.398157 filebeat.go:275: INFO All data collection completed. Shutting down.
2018/01/17 10:45:26.398184 crawler.go:109: INFO Stopping Crawler
2018/01/17 10:45:26.398202 crawler.go:119: INFO Stopping 1 prospectors
2018/01/17 10:45:26.398240 crawler.go:135: INFO Crawler stopped
2018/01/17 10:45:26.398298 processor.go:275: DBG [publish] Publish event: {
  "@timestamp": "2018-01-17T10:45:26.398Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.1.1"
  },
  "source": "",
  "offset": 6,
  "message": "hello",
  "beat": {
    "name": "service1.com",
    "hostname": "service1.com",
    "version": "6.1.1"
  }
}
2018/01/17 10:45:26.398398 log.go:99: DBG [harvester] Source is not continuable: /dev/stdin
2018/01/17 10:45:26.398414 harvester.go:238: INFO End of file reached: . Closing because close_eof is enabled.
2018/01/17 10:45:26.398428 harvester.go:467: DBG [harvester] Stopping harvester for file:
2018/01/17 10:45:26.398446 harvester.go:477: DBG [harvester] Closing file:
2018/01/17 10:45:26.398456 harvester.go:488: DBG [harvester] harvester cleanup finished for file:
2018/01/17 10:45:26.398480 logger.go:18: DBG [publish] client: closing acker
2018/01/17 10:45:26.398490 logger.go:18: DBG [publish] client: done closing acker
2018/01/17 10:45:26.398500 logger.go:29: DBG [publish] client: cancelled 0 events
2018/01/17 10:45:27.399815 async.go:94: DBG [logstash] connect
2018/01/17 10:45:27.433348 async.go:142: DBG [logstash] 1 events out of 1 events sent to logstash host graylogserver.com:5044. Continue sending
2018/01/17 10:45:27.434221 logger.go:29: DBG [memqueue] ackloop: receive ack [0: 0, 1]
2018/01/17 10:45:27.434249 logger.go:29: DBG [memqueue] broker ACK events: count=1, start-seq=1, end-seq=1
2018/01/17 10:45:27.434276 logger.go:18: DBG [memqueue] ackloop: return ack to broker loop:1
2018/01/17 10:45:27.434289 logger.go:18: DBG [memqueue] ackloop:  done send ack
2018/01/17 10:45:56.396425 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30001 beat.memstats.gc_next=4194304 beat.memstats.memory_alloc=2845656 beat.memstats.memory_total=4369760 filebeat.events.active=1 filebeat.events.added=1 filebeat.harvester.closed=1 filebeat.harvester.open_files=-1 filebeat.harvester.running=0 filebeat.harvester.started=1 libbeat.config.module.running=0 libbeat.output.events.acked=1 libbeat.output.events.batches=1 libbeat.output.events.total=1 libbeat.output.read.bytes=6 libbeat.output.type=logstash libbeat.output.write.bytes=192 libbeat.pipeline.clients=0 libbeat.pipeline.events.active=0 libbeat.pipeline.events.published=1 libbeat.pipeline.events.retry=1 libbeat.pipeline.events.total=1 libbeat.pipeline.queue.acked=1 registrar.states.current=0

Command:

    nohup echo "hello" | /u01/filebeat-6.1.1-linux-x86_64/filebeat -c filebeat.yml -e -d "*" -once &

After the execution, the Beat process is still running in the background. The expectation is that it should terminate once stdin reaches end of file.

    u01]$ ps aux | grep beat
    service+   5255  0.0  0.0 257544 10536 pts/3    Sl   04:45   0:00 /u01/filebeat-6.1.1-linux-x86_64/filebeat -c filebeat.yml -e -d * -once
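
For what it is worth, on Linux you can inspect what the lingering process is blocked on via /proc (5255 is the PID from the ps output above):

    # What does the stuck Filebeat's stdin still point at? (pipe, tty, ...)
    ls -l /proc/5255/fd/0
    # Is it sleeping, running, or a zombie?
    grep State /proc/5255/status
    # Which kernel function is it blocked in, if any?
    cat /proc/5255/wchan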

My filebeat.yml:

    filebeat:
      shutdown_timeout: 0s
      prospectors:
        -
          paths:
            - "-"
          input_type: stdin
          close_eof: true
    output:
      logstash:
        hosts: ["graylogserver:5044"]

Thanks for the logs. The order in which things happen does not look right. Does this always happen, or only from time to time?

@steffens I wonder if the above could also be related to the pipeline refactoring?

@Vignesh_Kumar_A Could you open an issue for this on GitHub?
