Filebeat stops harvesting new logs

Hi all, I use Filebeat to collect Docker logs in k8s. After the environment was set up, Filebeat sent some logs to Logstash, but a short time later it stopped sending any logs at all. Can someone help me with this problem?

Filebeat is deployed as a DaemonSet in k8s. Its output is a k8s Service, which forwards the data to the pod where Logstash runs:
Filebeat(pod) -> k8s service -> logstash(pod)

Filebeat pod resources: 200Mi memory request, 2048Mi memory limit
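For context, roughly how this is wired up in k8s. A minimal sketch only; the Service name and labels are assumptions for illustration, and just the port (5044) and the memory figures come from the setup above:

# Hypothetical Service in front of the Logstash pod (name/labels assumed)
apiVersion: v1
kind: Service
metadata:
  name: logstash
spec:
  selector:
    app: logstash
  ports:
    - port: 5044
      targetPort: 5044
---
# Fragment of the Filebeat DaemonSet container spec (figures from above)
resources:
  requests:
    memory: 200Mi
  limits:
    memory: 2048Mi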

The Filebeat config is:
filebeat:
  inputs:
    - paths:
        - /data/docker/containers/*/*.log
      type: container

output:
  logstash:
    hosts:
      - 172.18.71.24:5044

path:
  config: /home/work/filebeat
  data: /home/work/filebeat/data
  home: /home/work/filebeat
  logs: /home/work/filebeat/logs

The log file gets new content every second: /data/docker/containers/eaf63cc827f6c64b6f627358db3754d571a015207d08fa428a133709cb7953f0/eaf63cc827f6c64b6f627358db3754d571a015207d08fa428a133709cb7953f0-json.log

But I can't find this file's path in /home/work/filebeat/data/registry/filebeat/data.json.
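(If you want to check which files the registry is tracking, data.json is a JSON array of file states, so the tracked paths can be listed directly. A minimal sketch, assuming jq is available on the node:)

# list every path Filebeat currently tracks in the registry
jq -r '.[].source' /home/work/filebeat/data/registry/filebeat/data.json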

Filebeat was started with the command: cd /home/work/filebeat && ./filebeat -e -d "*" -c ./filebeat.yml

And I found the logs below:

At first it sent logs to Logstash:

2019-11-18T11:29:57.955Z DEBUG [processors] processing/processors.go:183 Publish event: {
"@timestamp": "2019-11-18T10:18:43.216Z",
"@metadata": {
"beat": "filebeat",
"type": "_doc",
"version": "7.4.1"
},
"ecs": {
"version": "1.1.0"
},
"host": {
"name": "filebeat-4l94p"
},
"agent": {
"hostname": "filebeat-4l94p",
"id": "74afff35-6fa9-4dea-b890-0b0a7a7f498c",
"version": "7.4.1",
"type": "filebeat",
"ephemeral_id": "71460aea-a2d6-41b1-bcb1-62cb2c73d025"
},
"log": {
"offset": 49832,
"file": {
"path": "/data/docker/containers/eaf63cc827f6c64b6f627358db3754d571a015207d08fa428a133709cb7953f0/eaf63cc827f6c64b6f627358db3754d571a015207d08fa428a133709cb7953f0-json.log"
}
},
"stream": "stdout",
"message": "4986260166516802014",
"input": {
"type": "container"
}
}

Later I got some other messages:

2019-11-18T11:30:07.438Z DEBUG [input] log/input.go:421 Check file for harvesting: /data/docker/containers/e8623ad7887c49fbb9f2839b62bdef8d4e2d531472a795e02497b8759f10b22d/e8623ad7887c49fbb9f2839b62bdef8d4e2d531472a795e02497b8759f10b22d-json.log
2019-11-18T11:30:07.438Z DEBUG [input] log/input.go:494 Start harvester for new file: /data/docker/containers/e8623ad7887c49fbb9f2839b62bdef8d4e2d531472a795e02497b8759f10b22d/e8623ad7887c49fbb9f2839b62bdef8d4e2d531472a795e02497b8759f10b22d-json.log
2019-11-18T11:30:07.439Z DEBUG [harvester] log/harvester.go:494 Setting offset for file based on seek: /data/docker/containers/e8623ad7887c49fbb9f2839b62bdef8d4e2d531472a795e02497b8759f10b22d/e8623ad7887c49fbb9f2839b62bdef8d4e2d531472a795e02497b8759f10b22d-json.log
2019-11-18T11:30:07.439Z DEBUG [harvester] log/harvester.go:480 Setting offset for file: /data/docker/containers/e8623ad7887c49fbb9f2839b62bdef8d4e2d531472a795e02497b8759f10b22d/e8623ad7887c49fbb9f2839b62bdef8d4e2d531472a795e02497b8759f10b22d-json.log. Offset: 0
2019-11-18T11:30:07.439Z DEBUG [harvester] log/harvester.go:182 Harvester setup successful. Line terminator: 1
2019-11-18T11:30:12.423Z DEBUG [harvester] log/log.go:107 End of file reached: /data/docker/containers/e962f88dec35d135487ba1c503d0d230df882fbf14f826b30849935c4900e29e/e962f88dec35d135487ba1c503d0d230df882fbf14f826b30849935c4900e29e-json.log; Backoff now.
2019-11-18T11:30:12.423Z DEBUG [harvester] log/log.go:107 End of file reached: /data/docker/containers/bcc965db711efc0a009dbfd9a02c0701c6cfeaef796fe971b2dcb43be827f802/bcc965db711efc0a009dbfd9a02c0701c6cfeaef796fe971b2dcb43be827f802-json.log; Backoff now.

Finally, all the logs look like this:

2019-11-18T12:42:27.420Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":720},"total":{"ticks":2650,"time":{"ms":24},"value":2650},"user":{"ticks":1930,"time":{"ms":24}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":13},"info":{"ephemeral_id":"71460aea-a2d6-41b1-bcb1-62cb2c73d025","uptime":{"ms":4350040}},"memstats":{"gc_next":20619120,"memory_alloc":10474976,"memory_total":96499944},"runtime":{"goroutines":64}},"filebeat":{"harvester":{"open_files":7,"running":24}},"libbeat":{"config":{"module":{"running":0}},"output":{"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":32}},"system":{"load":{"1":0.09,"15":0.08,"5":0.07,"norm":{"1":0.015,"15":0.0133,"5":0.0117}}}}}}

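(The telling number in that metrics line is "events":{"active":4117} while the output has only ever read 36 bytes: events are queuing in the pipeline without being acknowledged, which usually means the output side is blocked or applying backpressure rather than the harvesters failing. To watch this without running with -d "*", Filebeat can serve the same metrics over HTTP; a minimal sketch using the standard http.* settings, where 5066 is the default port:)

# filebeat.yml: enable the local stats endpoint
http:
  enabled: true
  host: localhost
  port: 5066

# then, on the node:
curl -s http://localhost:5066/stats | jq '.libbeat.pipeline.events'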

I hope someone can help me resolve this problem, thank you so much~

Adding some Logstash logs (/home/work/logstash/logs/logstash-plain.log). It seems pod 2 got some logs from Filebeat, and both of these pods are running without any problems.
pod1:

[2019-11-18T12:53:31,892][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-11-18T12:53:31,963][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.4.1"}
[2019-11-18T12:53:35,244][INFO ][org.reflections.Reflections] Reflections took 81 ms to scan 1 urls, producing 20 keys and 40 values
[2019-11-18T12:53:36,190][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-11-18T12:53:36,196][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, :thread=>"#<Thread:0x1b704979 run>"}
[2019-11-18T12:53:36,856][INFO ][logstash.inputs.beats ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-11-18T12:53:36,871][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2019-11-18T12:53:37,038][INFO ][org.logstash.beats.Server][main] Starting server on port: 5044
[2019-11-18T12:53:37,045][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-11-18T12:53:37,566][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

pod2:
[2019-11-18T12:52:09,209][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-11-18T12:52:09,243][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.4.1"}
[2019-11-18T12:52:14,245][INFO ][org.reflections.Reflections] Reflections took 104 ms to scan 1 urls, producing 20 keys and 40 values
[2019-11-18T12:52:15,932][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-11-18T12:52:16,015][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, :thread=>"#<Thread:0x444d655c run>"}
[2019-11-18T12:52:17,302][INFO ][logstash.inputs.beats ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-11-18T12:52:17,320][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2019-11-18T12:52:17,633][INFO ][org.logstash.beats.Server][main] Starting server on port: 5044
[2019-11-18T12:52:17,634][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-11-18T12:52:18,613][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-11-18T12:52:58,604][INFO ][logstash.outputs.pipe ][main] Opening pipe {:command=>"/home/work/bmlApps/bin/logUploader -config-file /home/work/bmlApps/conf/logUploader.conf -fake-upload"}
[2019-11-18T12:52:59,301][INFO ][logstash.outputs.pipe ][main] Starting stale pipes cleanup cycle {:pipes=>{"/home/work/bmlApps/bin/logUploader -config-file /home/work/bmlApps/conf/logUploader.conf -fake-upload"=>#<PipeWrapper:0x790ebf76 @pipe=#<IO:fd 182>, @active=true>}}

I changed the Logstash output from pipe (to a self-developed program) to file, and it works well.
I think my program caused this problem: the pipe blocked, so Logstash stopped acknowledging events and the backpressure reached Filebeat. Please ignore this issue.
Thank you again for your attention to this problem.
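(For anyone who hits the same symptom, the change was essentially the following; the pipe command is the one from the logs above, and the file path is an assumption for illustration:)

output {
  # before: pipe to the custom uploader; when it blocked, backpressure reached Filebeat
  # pipe {
  #   command => "/home/work/bmlApps/bin/logUploader -config-file /home/work/bmlApps/conf/logUploader.conf -fake-upload"
  # }

  # after: write to a local file instead (path assumed for illustration)
  file {
    path => "/home/work/logstash/logs/out-%{+YYYY-MM-dd}.log"
  }
}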