When the output target stops for a period of time and then resumes, Filebeat stays in a blocked state

version: 7.16
os: centos 7.6
install: rpm
Steps to Reproduce:

  1. Configure Filebeat with an http_endpoint input and a Logstash output.
  2. Drive a fairly high request rate into Filebeat, e.g. 60 requests/s; a script that sends concurrent requests works (see the sketch after this list).
  3. Stop Logstash, or otherwise make it unreachable from Filebeat.
  4. Wait for a while (more than 10 s, or even 1-2 minutes), then restore Logstash to normal operation.
    At this point you can observe that Filebeat no longer handles incoming HTTP requests normally. The error in the log changes from
    "connect: connection refused" to
    "socket: too many open files",
    and running
    netstat -tnl
    shows the Recv-Q column piling up.

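A minimal sketch of the kind of script I mean for step 2 (not the exact one I used; it assumes the http_endpoint input is listening on port 8443 with its default "/" path and JSON content type, and that Python 3 is available on the sending machine):

#!/usr/bin/env python3
# Rough load generator: submits about 60 concurrent JSON POSTs per second
# against the filebeat http_endpoint input. FILEBEAT_HOST is a placeholder.
import json
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://FILEBEAT_HOST:8443/"  # replace with the host running filebeat

def send_one(i):
    body = json.dumps({"message": f"test event {i}"}).encode()
    req = urllib.request.Request(URL, data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5).read()
    except Exception as exc:
        print(f"request {i} failed: {exc}")

# submit 60 requests every second from a small thread pool
with ThreadPoolExecutor(max_workers=20) as pool:
    i = 0
    while True:
        for _ in range(60):
            pool.submit(send_one, i)
            i += 1
        time.sleep(1)
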
The exact log message and the Recv-Q buildup may not reproduce every time. For example, on another machine the error I observed in the log was "i/o timeout", but the end result is the same: Filebeat can no longer work normally.
The only way I can recover at the moment is to restart Filebeat.
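For anyone trying to reproduce this, a rough helper (just a sketch, assuming Filebeat runs as a single process, pidof is available, and it is run as root on the Filebeat host) to watch the open file descriptor count climb during the outage:

#!/usr/bin/env python3
# Poll the number of open file descriptors of the filebeat process
# every 5 seconds while the Logstash outage is simulated.
import os
import subprocess
import time

pid = int(subprocess.check_output(["pidof", "filebeat"]).split()[0])
while True:
    fds = len(os.listdir(f"/proc/{pid}/fd"))
    print(time.strftime("%H:%M:%S"), "open fds:", fds)
    time.sleep(5)
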

Below is an example of my config file:

logging.metrics.enabled: false
logging.selectors: [processors]
logging.files.name: filebeat.log
logging.files.keepfiles: 15
logging.files.rotateeverybytes: 1073741824

filebeat.inputs:
- type: http_endpoint
  enabled: true
  listen_address: 0.0.0.0
  listen_port: 8443
  include_headers: ["User-Agent","x-forwarded-for","x-remote-IP","x-originating-IP","x-remote-ip","x-remote-addr","x-client-ip","x-client-IP","X-Real-ip","remote_addr","client-ip"]

output.logstash:
  hosts: ["10.168.0.30:44050"]
