We are using Filebeat 5.6 to tail the logs of all the Docker containers on a machine and send them to Logstash.
Here is our Filebeat configuration:
```
filebeat.prospectors:
- input_type: log
  paths:
    - /var/lib/docker/containers/*/*.log*
  fields:
    env: dev
  ignore_older: 2h
  document_type: docker
  scan_frequency: 10s
  close_inactive: 2h
  close_removed: true
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  exclude_files: ['.gz$']

output.logstash:
  hosts: ["logstash:5001"]
  timeout: 30s
  max_bytes: 104857600
  max_message_bytes: 104857600
```
Earlier we were using Docker Swarm as the orchestration tool, with 40-50 containers per node, and the above configuration worked fine; we didn't see any issues with it.
After we moved to Kubernetes, the number of containers per node has doubled. Since then, Filebeat stops harvesting or sending logs to Logstash for a few containers. Restarting Filebeat on the affected node gets the logs flowing again, but after some time we see the same issue again.
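For reference, the restart is just a service restart on the node (a minimal sketch, assuming Filebeat 5.6 is installed as a systemd service on the host; the command differs if it runs as a container):

```sh
# Restart Filebeat on the affected node so it resumes shipping logs.
# Assumes a host install managed by systemd.
sudo systemctl restart filebeat
```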
The timestamps in the Filebeat registry file say it has tailed all the logs.
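In case it helps, this is roughly how we inspect the registry (a minimal sketch, assuming the default Filebeat 5.x registry file at /var/lib/filebeat/registry and jq being available on the node):

```sh
# Dump the source path, offset, and timestamp of every registry entry,
# to compare against the container log files on disk.
# The registry path is the default for a host install; it may differ in your setup.
sudo jq '.[] | {source, offset, timestamp}' /var/lib/filebeat/registry
```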