Filebeat + Logstash: output events per second cannot break a hard limit (CPU and memory are sufficient)

Filebeat host: 2 cores, 8 GB RAM. Logstash host: 8 cores, 8 GB RAM.

Filebeat and Logstash are both version 7.10.0.

A single Filebeat node sends logs to Logstash. Throughput is stuck at a fixed ceiling of about 6,000 events per second and will not go higher, even though CPU and memory are far from saturated. When the same Filebeat outputs to a file instead, it reaches about 18,000 events per second.
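For reference, outputting to a file for a comparison like this is just Filebeat's file output; a sketch, with a placeholder path:

output.file:
  path: "/tmp/filebeat-bench"   # throwaway directory used only for the benchmark
  filename: filebeat.out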

I have tried every configuration option suggested in the official documentation.

thanks

You'll need to share your logstash.yml configuration; that's where the most relevant performance settings live, in particular pipeline.workers and pipeline.batch.size. I haven't changed the Java options (e.g. heap size) for Logstash at all, so those two are largely the only things I have tuned.
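The relevant part of logstash.yml is only a few lines; the values below are illustrative, not a recommendation for your hardware:

# logstash.yml -- the settings that usually matter most for raw throughput
pipeline.workers: 8        # typically one worker per CPU core
pipeline.batch.size: 1000  # events each worker pulls from the queue per batch (default 125)
pipeline.batch.delay: 50   # ms to wait for a batch to fill (the default)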

For the purposes of benchmarking you'll want to avoid writing to Elasticsearch; just drop the records so you can see the true limit. My own performance analytics tell me Elasticsearch is easily the slowest part of my Logstash pipeline (I do throw a lot of data at it).
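A throwaway benchmarking pipeline along these lines is enough; the beats port below is just the usual default:

input {
  beats {
    port => 5044
  }
}
# ... your normal filters here ...
output {
  # one dot per event; cheap enough to stand in for "no output"
  stdout { codec => dots }
}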

For a data point, on a physical server (about five years old), my statistics tell me my current peak is 14.4 keps (thousand events per second) -- and it could go faster; Logstash is not generally the bottleneck, or it wasn't until I started with the memcached plugin today.

PS. I have recently started using logstash_exporter and reporting the results to Prometheus. Be sure to give each plugin invocation a useful 'id' field so you can track the performance of each stage of your pipeline. logstash_exporter just uses the REST API; here's an example (noting that the default port is 9600, not 9601 as in this example). I'm using the 'jq' tool to extract the performance counters for just a single plugin (which is undergoing performance engineering at present, grrrr):

# curl -s "http://127.0.0.1:9601/_node/stats/pipelines/main" | jq '.pipelines.main.plugins.filters[] | select(.id == "networking.memcached.32")'
{
  "id": "networking.memcached.32",
  "name": "memcached",
  "events": {
    "in": 12139,
    "duration_in_millis": 1901,
    "out": 12139
  }
}
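For completeness, the 'id' is just an option inside the plugin block; the filter and the id string below are only placeholders:

filter {
  mutate {
    # a stable, human-readable id shows up in _node/stats and in
    # logstash_exporter metrics instead of an auto-generated hash
    id => "networking.rename_fields.01"
    rename => { "[agent][hostname]" => "[host][name]" }
  }
}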

I have replaced the Logstash output section as follows:
stdout { codec => dots }

Filebeat configuration: (screenshot attached)

Logstash JVM heap: 4 GB
Logstash configuration file: (screenshot attached)
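The 4 GB heap is set in config/jvm.options along these lines (a sketch, not the exact file):

-Xms4g
-Xmx4g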

Each log entry is about 1,000 bytes, so the 6,000 events/second ceiling corresponds to only about 6 MB/s.

thanks

Solution:
Throughput of a single Filebeat instance can be increased by enabling load balancing in its Logstash output and running multiple Logstash nodes.
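In filebeat.yml that looks roughly like this; the host names and tuning values are placeholders:

output.logstash:
  hosts: ["logstash-1:5044", "logstash-2:5044", "logstash-3:5044"]
  loadbalance: true    # distribute batches across all listed hosts
  worker: 2            # network workers per host
  bulk_max_size: 2048  # events per batch sent to Logstash
  pipelining: 2        # batches in flight per connection

The same approach works against several ports on one machine, as long as each port is backed by its own Logstash pipeline or instance.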

Refer to the article:
