OS: CentOS 7.6
Steps to Reproduce:
- Configure the filebeat input as an http_endpoint, with the output configured as Logstash.
- Send filebeat a high request rate, e.g. 60 qps; you can use a script issuing concurrent requests to reach this.
- Stop Logstash, or make Logstash unreachable from filebeat by other means.
- Wait for a period of time (more than 10s, or 1-2 minutes), then restore Logstash to normal operation.
At this point you can observe that filebeat no longer serves http requests normally. The Error in the log changes from "connect: connection refused" to "socket: too many open files", and the listen socket's Recv-Q keeps stacking up.
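The Recv-Q stacking can be observed with `ss -ltn`, or with a small sketch like the one below that reads it from /proc/net/tcp (Linux-only; port 8443 is assumed from the config below). For a listening socket, the kernel reports the current accept-queue length in this field.

```python
def recv_queue(port):
    """Return the Recv-Q of the LISTEN socket on `port` (for a listening
    socket this is the current accept-queue length), or None if no
    listener is found. Linux-only: parses /proc/net/tcp and /proc/net/tcp6."""
    for path in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(path) as f:
                next(f)  # skip the header line
                for line in f:
                    fields = line.split()
                    local_port = int(fields[1].split(":")[1], 16)  # hex port
                    state = fields[3]                              # "0A" == LISTEN
                    rx_queue = int(fields[4].split(":")[1], 16)
                    if state == "0A" and local_port == port:
                        return rx_queue
        except FileNotFoundError:
            pass
    return None
```

Polling `recv_queue(8443)` while the reproduction runs should show the value growing once filebeat stops accepting connections.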
The exact log message and the Recv-Q buildup may not always reproduce. For example, on another machine the Error observed in the log is "i/o timeout", but it is still reproducible that filebeat can no longer work normally.
The only way I have found to recover at the moment is to restart filebeat.
Below is an example of my config file:
logging.metrics.enabled: false
logging.selectors: [processors]
logging.files.name: filebeat.log
logging.files.keepfiles: 15
logging.files.rotateeverybytes: 1073741824
filebeat.inputs:
- type: http_endpoint
  enabled: true
  listen_address: 0.0.0.0
  listen_port: 8443
  include_headers: ["User-Agent","x-forwarded-for","x-remote-IP","x-originating-IP","x-remote-ip","x-remote-addr","x-client-ip","x-client-IP","X-Real-ip","remote_addr","client-ip"]
output.logstash:
  hosts: ["10.168.0.30:44050"]