Logstash pusher performance: 2.2.4 vs 5.0.2


(Bevan Bennett) #1

We have a fairly active ELK stack set up with the following configuration:

  1. Local pushers running on the application servers with:
  • file input using a multiline codec over multiple logfiles
  • a minimal filter adding host-specific fields to each message
  • a trim filter truncating any message larger than X bytes
  • s3 output
  2. Indexers in AWS doing the heavy grokking, with s3 input and es output

Several servers push around 50,000 lines per minute, but our "batch processing servers" can generate up to 250,000 lines per minute.
The trim filter is needed to prevent overly large log lines from causing S3 upload failures.
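For reference, a minimal sketch of what such a pusher config could look like. This assumes the "trim filter" is the community truncate filter (logstash-filter-truncate); the paths, field name, bucket, and byte limit are all placeholders, not the poster's actual values:

```
input {
  file {
    path => ["/var/log/app/*.log"]     # placeholder paths
    codec => multiline {
      pattern => "^\s"                 # assumed: continuation lines start with whitespace
      what => "previous"
    }
  }
}

filter {
  mutate {
    add_field => { "app_host" => "batch-01" }   # hypothetical host-specific field
  }
  truncate {
    fields => ["message"]
    length_bytes => 16384              # stand-in for the "X bytes" limit above
  }
}

output {
  s3 {
    bucket => "example-log-bucket"     # placeholder bucket
    region => "us-east-1"
  }
}
```

Note the truncate filter is not bundled by default and would need `bin/logstash-plugin install logstash-filter-truncate`.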

When we tried to upgrade these pushers to 5.0 (using the same config), throughput on the application servers dropped to around 4k lines/minute and throughput on the batch servers dropped to around 13k lines/minute. In both cases that is roughly a factor of 20 below what we were getting before.

We tried a few general tuning modifications, but were unable to get the performance we need and had to roll back to 2.2.4.
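For anyone comparing notes, the general 5.x tuning knobs that one would expect to matter here are the pipeline settings in logstash.yml. The values below are illustrative only, not recommendations:

```
# logstash.yml (Logstash 5.x) -- illustrative values, tune per host
pipeline.workers: 4        # defaults to the number of CPU cores
pipeline.batch.size: 250   # events per worker batch (default 125)
pipeline.batch.delay: 5    # ms to wait before flushing an undersized batch
```

Heap size (LS_HEAP_SIZE in 2.x, jvm.options in 5.x) is the other usual suspect.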

Has anyone else had similar experiences with 5.0 pusher performance, or, even better, managed to get their pushers pushing faster?
Does anyone have suggestions for other tuning we could try or infrastructure changes we could make?
I'd like to move forward with the new Logstash if possible...


(Bevan Bennett) #2

Does anyone at least have some logstash pusher metrics to share?
Are we asking for a reasonable volume or is this something unusual?
If unusual, what are people doing differently to avoid this bottleneck in 5.0?


(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.