How to increase Filebeat speed

Dear elastic team,

In my environment I have around 6-7 applications. These applications log around 30-40 lines per second, which adds up to a few GB per day. Filebeat can't keep up with parsing the logs and sending them to Elasticsearch (via Logstash). I tried to increase Filebeat's throughput by adding additional flags, without success.

My filebeat version is:
```
filebeat version 7.3.1 (amd64), libbeat 7.3.1 [a4be71b90ce3e3b8213b616adfcd9e455513da45 built 2019-08-19 19:30:50 +0000 UTC]
```

and my config:

```yaml
filebeat.spool_size: 8192
filebeat.publish_async: true

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /opt/*.log
    include_lines: ['ERROR']
    close_removed: true

output.logstash:
  hosts: ["logstash:5048", "logstash:5054", "logstash:5055"]
  bulk_max_size: 8192
  loadbalance: true
  worker: 3
```

Is there any way to increase Filebeat's processing speed?


At that throughput level it sounds unlikely that Filebeat is the bottleneck. Filebeat can only send as fast as Logstash and downstream systems can accept. How have you determined that Filebeat is the bottleneck?

Thank you, Christian, for the reply.

I started Filebeat in debug mode with this command:

```
sudo filebeat -c /etc/filebeat/filebeat.yml -d "publish"
```

In the `message` field I saw that the timestamp from my log was much older than the current time, and the gap between the two kept growing over time. Example log line:

```
11:26:12.590 INFO xxx
```


What is the value of the `scan_frequency` parameter?
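For reference, `scan_frequency` is set per input and controls how often Filebeat checks the configured paths for new files (the default is 10s). A minimal sketch, assuming the same `/opt/*.log` input as in the config above:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /opt/*.log
    # How often to check the paths for new files; default is 10s.
    # Note: this does not limit how fast already-open files are read.
    scan_frequency: 10s
```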

Also, to verify whether Filebeat is the bottleneck, you can try sending Filebeat's output to the console:

```yaml
output.console:
  enabled: true
  pretty: true
```

Make sure to disable the other outputs.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.