I haven't used the script in a while, but it should still work. That said, since there are many more variables now, the script might break if the terminal window is not big enough...
filebeat reports non-zero metrics to its logs about every 30 seconds. Note that it reports the delta, not a running total: if you look for acked_events and divide the value by 30s, you get the event rate. Logstash also collects internal metrics of its own, which you can already use via x-pack monitoring.
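The delta-to-rate computation above can be sketched in shell. The log line below is a made-up example, and the exact metric name (here ending in acked_events) differs between filebeat versions, so treat this as an illustration only:

```shell
# Hypothetical filebeat metrics log line (format and metric name are
# assumptions; check your actual filebeat logs for the real names).
line='2017-06-01T12:00:30Z INFO Non-zero metrics in the last 30s: libbeat.logstash.published_and_acked_events=1500'

# Pull out the number after "acked_events=".
delta=$(printf '%s\n' "$line" | grep -o 'acked_events=[0-9]*' | cut -d= -f2)

# The metrics interval is 30s, so delta/30 is the average event rate.
echo "$((delta / 30)) events/s"
```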
For testing filebeat->logstash throughput without filters, one can use this logstash configuration:
input {
  beats { ... }
}
output {
  stdout { codec => "dots" }
}
This prints one . per event processed by logstash. Running logstash with logstash -f test.conf | pv -War >/dev/null you can see the current and average event rate in your terminal. Because this test removes filters, real outputs, and any other source of back-pressure, it gives you a fairly good baseline for the event rate you can actually send to logstash in your environment. This way you can also see how additional filebeat instances affect overall throughput (which should not scale linearly, due to additional contention).
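A complete test.conf might look like the following. The port is an assumption (5044 is the conventional beats port); adjust it to whatever your filebeat output.logstash is configured to use:

```
input {
  beats {
    port => 5044   # assumed; match your filebeat output.logstash hosts setting
  }
}
output {
  stdout { codec => "dots" }
}
```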