Filebeat sending data to Logstash seems too slow

Which Filebeat/Logstash versions are you using? I haven't tested with Logstash 2.3 yet, but Logstash 2.2.1 did improve performance a little.

Any log output from Logstash? Elasticsearch and Logstash can create back-pressure that also affects Filebeat. If Elasticsearch cannot index fast enough, Logstash will be slowed down by Elasticsearch. If Logstash is slowed down or cannot process data in time, it will block/slow down Filebeat (as we don't want to drop any log lines).
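If Elasticsearch turns out to be the bottleneck, one thing to try is giving the Logstash elasticsearch output more parallelism. A minimal sketch (the host, worker count, and batch size here are illustrative assumptions; `workers` and `flush_size` are options of the Logstash 2.x elasticsearch output plugin):

    output {
      elasticsearch {
        hosts => ["localhost:9200"]   # assumption: adjust to your cluster
        workers => 4                  # more parallel bulk requests
        flush_size => 2048            # larger bulk batches per request
      }
    }

Whether this helps depends on your cluster; if Elasticsearch itself is saturated, more workers won't remove the back-pressure.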

Sometimes, if Logstash is receiving data from too many workers and the filter/output pipeline takes too long, Logstash might kill the filebeat->logstash connection once the congestion threshold (default 5s) is exceeded. This can potentially slow down further processing (check the Logstash logs). To prevent this from happening, set congestion_threshold in the Logstash beats input plugin to some very high value: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html#plugins-inputs-beats-congestion_threshold.
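For example, a beats input with a raised congestion_threshold could look like this (the port matches the Filebeat config below; the threshold value is just an illustrative assumption):

    input {
      beats {
        port => 5043
        congestion_threshold => 300   # seconds; default is 5
      }
    }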

filebeat config with full load-balancing + 4 workers sending to logstash:

filebeat:
  spool_size: 8192
  publish_async: true
  prospectors:
    -
      paths:
        - D:\LogApplicatives\XXXX.*
      document_type: XXXXX

output:
  logstash:
    hosts: ["logstash:5043"]
    bulk_max_size: 8192
    loadbalance: true
    workers: 4

Please note: full async load balancing increases CPU and memory usage in filebeat and logstash.
