FileBeat client failed to connect

  1. Do you see any log messages from logstash? If logstash decides it is 'overloaded', it will refuse new connections for N seconds. The logstash output in beats has a default timeout of 30s, and it seems the connection could not be established within that window. With 30 workers in total, it is possible that only a subset of workers managed to connect. Have you checked with netstat how many connections are actually established?
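For reference, the timeout and worker count live in the logstash output section of filebeat.yml. This is a sketch assuming the filebeat 1.x nested layout; the host name is a placeholder and 5044 is just the conventional beats port:

```yaml
output:
  logstash:
    hosts: ["logstash-host:5044"]  # placeholder host; adjust to your setup
    timeout: 30                    # seconds to wait before giving up on a connection (default 30)
    worker: 30                     # workers per configured host, as described above
```

To count established connections from the filebeat side, something like `netstat -tn | grep 5044 | grep -c ESTABLISHED` should give a quick answer.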

  2. This filebeat config as-is doesn't really make use of load-balancing, so there is no need for a total of 30 workers pushing to logstash. By default filebeat pushes a batch to one worker and waits for the ACK before pushing the next batch to another worker. To make load-balancing work properly in filebeat there are two options:

Option 1:
Enable publish_async: true in the filebeat section. With this option, batches are created and published fully asynchronously, so multiple workers are kept busy at once. CPU and memory usage will increase noticeably.
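A minimal sketch of Option 1, again assuming the filebeat 1.x layout:

```yaml
filebeat:
  publish_async: true  # publish spooler batches asynchronously instead of waiting for each ACK
```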

Option 2:
Increase spooler_size to be a multiple of bulk_max_size in the logstash output. By default both values are 2048. With spooler_size = 30 * bulk_max_size, the batch of lines created by the spooler is divided into 30 mini-batches, which the logstash output forwards with full load-balancing. Filebeat still has to wait for all logstash instances to ACK the publish request before sending the next batch. Throughput might be a little lower than with publish_async: true, but so might memory usage.
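A sketch of Option 2 in filebeat.yml (1.x layout assumed; hosts are placeholders), with spooler_size = 30 * 2048 = 61440:

```yaml
filebeat:
  spooler_size: 61440              # 30 * bulk_max_size, so each flush splits into 30 mini-batches

output:
  logstash:
    hosts: ["logstash-host:5044"]  # placeholder; list all logstash instances here
    bulk_max_size: 2048            # default batch size per publish request
    loadbalance: true              # needed so mini-batches fan out across hosts/workers
    worker: 30
```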