Hello,
I have set up Filebeat to ship a high-traffic Nginx access log to Logstash. However, Filebeat intermittently hangs, and each hang follows a consistent pattern of steadily increasing memory usage.
Filebeat and Nginx run as separate containers within the same pod in a Kubernetes (k8s) setup, sharing the access-log directory via a mounted volume.
Accesslog files are rotated every 4 hours. The rotation method is simple: rename, then gzip compression. It goes like this:
- application.log -> application.{date}.log -> application.{date}.log.gz
- and then we start writing a new application.log.
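For clarity, the rotation scheme can be sketched roughly like this (the paths and date format are illustrative, not our exact script; it uses a temp directory so it can be run standalone):

```shell
# Sketch of our 4-hourly rotation: rename, gzip, start a fresh log.
set -e
dir=$(mktemp -d)                                   # stand-in for the shared log volume
echo "sample access log line" > "$dir/application.log"

date_tag=$(date +%Y%m%d%H)
mv "$dir/application.log" "$dir/application.${date_tag}.log"   # step 1: rename
gzip "$dir/application.${date_tag}.log"                        # step 2: compress
: > "$dir/application.log"                                     # step 3: begin a new application.log
ls "$dir"
```

Note that after the rename, the writer keeps appending to the old inode until it reopens the file, so Filebeat's harvester still holds that inode open.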
I've observed a peculiar pattern: the hung Filebeat resumes operation when logrotate runs. As soon as the log file rotates, Filebeat starts working again (and its memory usage drops), ships logs for a while, and then hangs again. I suspect this could be a problem with the harvester's file descriptor usage.
In addition, Filebeat does not hang during the early-morning hours when traffic is low. During the daytime, when traffic is high, the log file grows to roughly 5 GB per 4-hour rotation window.
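To check the file-descriptor hypothesis, I plan to run something like the following inside the Filebeat container while it is hung, to see whether the harvester still holds a deleted or rotated file open (this assumes a Linux container with procfs; the `$$` fallback is only so the snippet runs standalone when no `filebeat` process exists):

```shell
# List the files the Filebeat process currently holds open.
# A "(deleted)" entry for an old application.*.log would confirm a leaked harvester FD.
pid=$(pgrep -o filebeat || echo $$)   # fall back to the current shell just for illustration
ls -l "/proc/${pid}/fd"
```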
- Filebeat version: 7.12.1
- Filebeat configuration:
queue.mem:
  events: 40960
  flush.min_events: 20480
filebeat.inputs:
  - type: log
    fields:
      _@type: nginx-log
      instance_name: myinstance
    fields_under_root: true
    exclude_files: ['\.gz$']
    paths:
      - mylogpath/application.log*
output:
  logstash:
    hosts: ["mylogstashinfo"]
    loadbalance: true
logging:
  level: warning
  to_files: true
  to_syslog: false
  files:
    path: /myfilebeatpath/logs
    name: filebeat-plain.log
    keepfiles: 10
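Based on my harvester/FD hypothesis, I'm considering adding close/clean settings to the input along these lines. To be clear, this is an untested sketch with guessed values, not something I've verified; I'm aware the docs warn that `close_timeout` can drop data if the output is only temporarily stalled:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - mylogpath/application.log*
    exclude_files: ['\.gz$']
    close_inactive: 5m    # release the FD if the file hasn't been updated for 5 minutes
    close_timeout: 30m    # hard cap on harvester lifetime (risk: may skip unsent lines)
    clean_inactive: 8h    # drop registry state for files untouched longer than this
    ignore_older: 6h      # don't pick up files older than this (must be < clean_inactive)
```

Would settings like these be the right direction, or does the hang point at something else (e.g. output backpressure from Logstash filling the memory queue)?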
Any insights or possible solutions for this hanging issue would be greatly appreciated.
Thank you!



