A single-node Filebeat instance sends logs to Logstash. The event rate plateaus at roughly 6,000 events per second and will not go above that limit. CPU and memory are not saturated, and Filebeat can reach 18,000 events per second when it outputs to files instead.
I have tried all the configuration options suggested in the official documentation.
You'll need to share your logstash.yml configuration; that's where the most relevant performance-related settings live, in particular `pipeline.workers` and `pipeline.batch.size`. I haven't changed the Java options (e.g. heap size) for Logstash at all, so it's largely just those two that I have tuned.
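As a starting point, a minimal logstash.yml sketch showing those two settings (the values here are illustrative assumptions, not recommendations -- tune them for your own hardware):

```yaml
# logstash.yml -- illustrative values only
pipeline.workers: 8        # defaults to the number of CPU cores
pipeline.batch.size: 1000  # events per worker per batch (default is 125)
```

Larger batches generally improve throughput at the cost of latency and heap pressure, so increase `pipeline.batch.size` gradually while watching memory.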
For the purposes of benchmarking you'll want to avoid writing to Elasticsearch; just drop the records so you can see the true limit of the pipeline itself. My own performance analytics tell me Elasticsearch is easily the slowest part of my Logstash pipeline (and I do throw a lot of data at it).
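One way to sketch such a benchmark pipeline is to replace the elasticsearch output with the `dots` codec on stdout, which prints a single `.` per event and costs almost nothing (the beats input on port 5044 is an assumption about your setup):

```
# benchmark.conf -- measure raw pipeline throughput without Elasticsearch
input {
  beats {
    port => 5044   # assumed Filebeat -> Logstash port
  }
}
# your filters go here, unchanged
output {
  # one '.' per event; near-zero output overhead for benchmarking
  stdout { codec => dots }
}
```

If events per second jumps well above 6,000 with this config, the bottleneck is the output stage rather than Filebeat or the filter chain.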
For a data point: on a physical server (about five years old), my statistics tell me my current peak processing rate is 14.4k events/second, and it could go faster; Logstash is not generally the bottleneck -- or wasn't, until I started with the memcache plugin today.
PS. I have recently started using logstash_exporter to report these metrics to Prometheus. Be sure to give each plugin invocation a useful `id` field, so you can track the performance of each stage of your pipeline. logstash_exporter just uses the Logstash REST API; here's an example (noting that the default port is 9600, not 9601 as in this example). I'm using the `jq` tool to extract the performance counters for just a single plugin (which is undergoing performance engineering at present, grrrr).
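A hedged sketch of what such a query can look like, using the node stats endpoint on the default port 9600 (the plugin id `my-memcached-lookup` and the pipeline name `main` are placeholders -- substitute whatever `id` you set in your own config):

```
# Pull per-plugin event counters from the Logstash monitoring API,
# then select one filter plugin by its 'id' field.
curl -s http://localhost:9600/_node/stats/pipelines |
  jq '.pipelines.main.plugins.filters[] | select(.id == "my-memcached-lookup")'
```

The returned object includes `events.in`, `events.out`, and `duration_in_millis`, which is enough to work out per-plugin throughput and cost.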