Filebeat stops harvesting logs from a few files suddenly, works fine after restart

We are using Filebeat version 5.6 to tail logs from all the Docker containers on a machine and send them to Logstash.

Here is the config we have for Filebeat.

```
filebeat.prospectors:
- input_type: log
  paths:
    - /var/lib/docker/containers/*/*.log*
  fields:
    env: dev
  ignore_older: 2h
  document_type: docker
  scan_frequency: 10s
  close_inactive: 2h
  close_removed: true
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  exclude_files: ['.gz$']

output.logstash:
  hosts: ["logstash:5001"]
  timeout: 30s

max_bytes: 104857600
max_message_bytes: 104857600
```
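
For reference, we are running with the defaults for the harvester-related options not shown above. A minimal sketch of what tuning them could look like in Filebeat 5.x, in case it is relevant to the number of open files per node (the values here are illustrative assumptions, not recommendations):

```
filebeat.prospectors:
- input_type: log
  paths:
    - /var/lib/docker/containers/*/*.log*
  # Cap the number of concurrent harvesters per prospector
  # (the default is unlimited)
  harvester_limit: 200
  # Close a harvester after this long even if the file is still changing,
  # so rotated container logs release their file handles
  close_timeout: 5m
  # Remove registry entries for files inactive longer than this;
  # must be greater than ignore_older + scan_frequency
  clean_inactive: 3h
  # Drop registry entries as soon as the file is deleted from disk
  clean_removed: true
```

Per the Filebeat docs, `clean_inactive` has to stay larger than `ignore_older` plus `scan_frequency`, otherwise files can be re-sent from the beginning after their state is cleaned.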

Earlier we were using Swarm as the orchestration tool, and each node had 40-50 containers. The above configuration was working fine; we didn't see any issues with it.

After we moved to Kubernetes, the number of containers on each node doubled. Since then, Filebeat stops harvesting or sending logs to Logstash from a few containers. Restarting Filebeat on the affected node gets the logs flowing to Logstash again, but after some time we see the same issue.

The Filebeat registry timestamps say it has tailed all the logs.

Do you have any Filebeat logs/metrics you can share?

Please format logs and configurations using the </> button or 3 backticks.
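
If file logging isn't already enabled on those nodes, something like the following should capture the periodic "Non-zero metrics" lines, which help diagnose stalled harvesters. This is a sketch assuming Filebeat 5.x logging options; the path and name are illustrative:

```
# Write Filebeat's own log to files (path/name are illustrative)
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat.log
  keepfiles: 7
# Emit internal metrics into the log periodically (on by default in 5.x)
logging.metrics.enabled: true
logging.metrics.period: 30s
```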

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.