How to test performance of filebeat

Performance will mostly depend on your complete processing chain. Filebeat itself can tail files very fast, especially while the content is still in the file cache. But once you add Logstash, Redis, Kafka, or Elasticsearch, performance will depend heavily on the network and the ingest rate of your destination, since filebeat slows down under back-pressure.

For testing you first need a source. Two options:

  1. a prepared log file of a few hundred megabytes
  2. a custom script writing random log lines at a configurable rate, simulating a real process (the rate can be dynamic to simulate peak times) -- see the sketch after this list
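
A minimal sketch of such a generator, assuming a bash-like shell with GNU coreutils (fractional `sleep`); the output path, rate, and line format are only placeholders:

```sh
#!/usr/bin/env bash
# Hypothetical log generator: appends synthetic log lines to OUT at roughly
# RATE lines per second. Both values are illustrative defaults.
OUT=${1:-/tmp/test.log}
RATE=${2:-100}   # lines per second
DELAY=$(awk -v r="$RATE" 'BEGIN { printf "%f", 1 / r }')

i=0
while true; do
  i=$((i + 1))
  # Timestamp + counter + random payload, vaguely resembling an app log line.
  echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) INFO test-process line=$i payload=$RANDOM" >> "$OUT"
  sleep "$DELAY"
done
```

Point a filebeat prospector at the output file and vary RATE over time to simulate peaks.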

Having a prepared log file gives you an easy start, e.g. the NASA HTTP log. You can multiply a log file by concatenating its content multiple times into the destination file, like `$ cat in.log in.log in.log > test.log`.
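If three copies are not enough, repeated doubling grows the file quickly; a small sketch (file names are just examples):

```sh
# Start from the original file, then double it five times (~32x the size).
cp in.log test.log
for n in 1 2 3 4 5; do
  cat test.log test.log > tmp.log && mv tmp.log test.log
done
```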

Filebeat can export some stats via the `-httpprof :6060` flag. Use the expvar_rates.py script to collect some stats.
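For example (assuming `-httpprof` exposes the standard Go expvar endpoint at /debug/vars on the given port):

```sh
# Run filebeat with the stats/profiling HTTP endpoint on port 6060.
filebeat -e -c filebeat.yml -httpprof :6060

# In another shell: dump the raw expvar counters as JSON.
curl -s http://localhost:6060/debug/vars
```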

See this post for some more tips and for how to use the expvar_rates.py script.

There is also collectbeat, another beat that collects the same information as expvar_rates.py but forwards it to Elasticsearch. I use it with the master branch, so I have no idea whether it works with the 1.x release.
