Hi,
I have Filebeat 7.9.1 running on several CentOS 7 servers, each watching a single log. The log grows at roughly 150,000 to 200,000 bytes per second (12 to 15 GB a day at the moment) and is rotated with copytruncate.
The issue is that at some point Filebeat no longer seems to send data. At first I thought Logstash was the problem, but the strange thing is that after a logrotate (we rotate the file when it reaches 4 GB) data delivery is restored.
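For reference, the rotation is done by logrotate with copytruncate, roughly along these lines (only the path, the 4 GB size trigger and copytruncate are certain; the other directives are an approximation of our setup):

    /product/AGL/agl-core/logs/agl.log {
        size 4G
        copytruncate
        rotate 7
        compress
        missingok
        notifempty
    }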
My filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /product/AGL/agl-core/logs/agl.log
  exclude_files: ['\.gz$']
  multiline.pattern: '^ts:'
  multiline.negate: true
  multiline.match: after
  tags: ["avs6", "api-log", "apigateway", "rt"]
  ignore_older: 48h
  close_inactive: 5m
  close_removed: true
  clean_removed: true
  scan_frequency: 10s
  harvester_limit: 4444

filebeat.config.modules:
  enabled: false
  # path: ${path.config}/modules.d/*.yml
  # reload.enabled: false

processors:
  - drop_fields:
      fields: ["host"]

fields:
  environment: production
  # environment: docker

queue.mem:
  events: 4096
  #flush.min_events: 1024
  #flush.timeout: 10s

output.logstash:
  enabled: true
  hosts: ["papps1479.ora.prd.itv.local:5044"]
  #hosts: ["papps1479.ora.prd.itv.local:5044","papps1480.ora.prd.itv.local:5044","papps1481.ora.prd.itv.local:5044"]
  loadbalance: true
  timeout: 1m
  #bulk_max_size: 2048
  slow_start: true

logging:
  level: info
  metrics:
    enabled: false
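To see what the harvester and the Logstash output are doing when the sending stops, I am considering enabling Filebeat's HTTP stats endpoint; a minimal sketch, assuming port 5066 is free on these hosts:

    # added to filebeat.yml (sketch): expose internal metrics locally
    http.enabled: true
    http.host: localhost
    http.port: 5066
    # then poll the endpoint while it is stalled, e.g.
    #   curl -s 'localhost:5066/stats?pretty'
    # and watch counters such as libbeat.output.events.acked and
    # filebeat.harvester.running over time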
What might cause this issue?