Filebeat publishes but nothing appears

Hi all,

I'm seeing strange behavior with some Filebeat instances on a Kubernetes cluster.
I send logs to a Logstash container from 4 nodes, with 4 Filebeat instances deployed as a DaemonSet.
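
For context, the DaemonSet looks roughly like the sketch below. This is not my exact manifest; the name, namespace, and mounts are simplified for illustration, and only the image tag (7.6.0) matches what you can see in the logs further down:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat                    # illustrative name, not the real one
  namespace: logging
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.6.0   # same version as in the debug logs below
        args: ["-c", "/etc/filebeat.yml", "-e"]
        env:
        - name: NODE_NAME                               # consumed by add_kubernetes_metadata in filebeat.yml
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: varlogcontainers                        # host logs harvested by the container input
          mountPath: /var/log/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: filebeat-config
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers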

From time to time, I stop getting logs into my ES cluster from one instance (the others are OK).
When I look at the Filebeat debug logs, I see that Filebeat is still publishing events, but they never end up in the Logstash instance.

When I exec into the container, I can telnet to the Logstash instance normally.

Below is the kind of logs I can see from the Filebeat instance:

2020-03-05T13:53:37.652Z DEBUG [harvester] log/log.go:107 End of file reached: /var/log/containers/wormhole-9k85b_kube-system_wormhole-88c36033dafa93d35ec9cbca23388616df0f57abd7fc76aa9cb1a8c7c2b72ed2.log; Backoff now.
2020-03-05T13:53:37.935Z DEBUG [harvester] log/log.go:107 End of file reached: /var/log/containers/celery-ool-pricing-68fd4d7989-xgnkz_default_celery-priceminister-pricing-bfc91e01ba7d539dbb21e66a1649c50a94d70ff2a5775632925a3e0b4d27b34b.log; Backoff now.
2020-03-05T13:53:37.961Z DEBUG [logstash] logstash/async.go:159 41 events out of 41 events sent to logstash host logstash.logging:5046. Continue sending
2020-03-05T13:53:37.965Z DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [34402: 0, 41]
2020-03-05T13:53:37.965Z DEBUG [publisher] memqueue/eventloop.go:535 broker ACK events: count=7, start-seq=67533, end-seq=67539

2020-03-05T13:53:37.965Z DEBUG [publisher] memqueue/eventloop.go:535 broker ACK events: count=34, start-seq=2083694, end-seq=2083727

2020-03-05T13:53:37.965Z DEBUG [publisher] memqueue/ackloop.go:128 ackloop: return ack to broker loop:41
2020-03-05T13:53:37.965Z DEBUG [publisher] memqueue/ackloop.go:131 ackloop: done send ack
2020-03-05T13:53:37.965Z DEBUG [acker] beater/acker.go:64 stateful ack {"count": 41}
2020-03-05T13:53:37.966Z DEBUG [registrar] registrar/registrar.go:356 Processing 41 events
2020-03-05T13:53:37.966Z DEBUG [registrar] registrar/registrar.go:326 Registrar state updates processed. Count: 41
2020-03-05T13:53:37.966Z DEBUG [registrar] registrar/registrar.go:411 Write registry file: /usr/share/filebeat/data/registry/filebeat/data.json (413)
2020-03-05T13:53:37.968Z DEBUG [processors] processing/processors.go:186 Publish event: {
"@timestamp": "2020-03-05T13:53:36.755Z",
"@metadata": {
"beat": "filebeat",
"type": "_doc",
"version": "7.6.0"

...

2020-03-05T14:11:01.828Z DEBUG [input] log/input.go:511 Update existing file for harvesting: /var/log/containers/celery-oiu-pricing-697564bf87-d6z7v_default_celery-oiu-pricing-1f3d883daab3b75c65bfc4d8323c8689a0b23d1b96cb5aafaef71a7b02c2cfe1.log, offset: 66596
2020-03-05T14:11:01.828Z DEBUG [input] log/input.go:565 File didn't change: /var/log/containers/celery-oiu-pricing-697564bf87-d6z7v_default_celery-backmarket-pricing-1f3d883daab3b75c65bfc4d8323c8689a0b23d1b96cb5aafaef71a7b02c2cfe1.log
2020-03-05T14:11:01.828Z DEBUG [input] log/input.go:421 Check file for harvesting: /var/log/containers/celery-otto--5f6c6fd465-t2nx9_default_celery-otto--13e45dc6cad5675a4609a8e7f27156736b53d49b6ecad794a08f68b5c9409651.log
2020-03-05T14:11:01.828Z DEBUG [input] log/input.go:511 Update existing file for harvesting: /var/log/containers/celery-otto--5f6c6fd465-t2nx9_default_celery-otto--13e45dc6cad5675a4609a8e7f27156736b53d49b6ecad794a08f68b5c9409651.log, offset: 13174
2020-03-05T14:11:01.828Z DEBUG [input] log/input.go:565 File didn't change: /var/log/containers/celery-otto-de-provisioning-5f6c6fd465-t2nx9_default_celery-otto--13e45dc6cad5675a4609a8e7f27156736b53d49b6ecad794a08f68b5c9409651.log
2020-03-05T14:11:01.828Z DEBUG [input] log/input.go:212 input states cleaned up. Before: 48, After: 48, Pending: 0
2020-03-05T14:11:02.332Z DEBUG [logstash] logstash/async.go:159 164 events out of 164 events sent to logstash host logstash.logging:5046. Continue sending

Below is the Filebeat config:

[root@v2 filebeat]# cat /etc/filebeat.yml
filebeat.inputs:
- type: container
  exclude_files: ['filebeat','celery','beat-manager']
  paths:
  - /var/log/containers/*.log
  processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
      - logs_path:
          logs_path: "/var/log/containers/"
- type: container
  paths:
  - /var/log/containers/celery-*.log
  - /var/log/containers/beat-manager-*.log
  multiline.pattern: '^['
  multiline.negate: true
  multiline.match: after
  processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
      - logs_path:
          logs_path: "/var/log/containers/"

processors:
- add_cloud_metadata:
- add_host_metadata:

logging.metrics.enabled: false
logging.level: debug

# logstash output
output.logstash:
  enabled: true
  hosts: ["logstash.logging:5046"]

If I restart the container, it goes back to normal.

Any idea what's happening?
