Logstash reading rate is max 8-9k per second

I am running Filebeat, which ships logs from my server to Redis, and the data volume is huge. Logstash, however, cannot read that data fast enough, so it takes a very long time to work through the whole backlog.

The issue: the rate at which data arrives in Redis is far higher than the rate at which Logstash reads it back out.

Please suggest configuration that would help me increase the Logstash read rate to 50,000 events per second with my filters enabled.

Current stats:
20,000 events/sec without filters, output to a flat file
8-9k events/sec with filters, output to InfluxDB
10-12k events/sec with filters, output to a flat file

My requirement is at least 50,000 events per second into InfluxDB, with filters. Please advise.

For the pipeline, try increasing pipeline.workers and pipeline.batch.size.

For my data sources with higher event rates I used 32 workers and a batch size of 1000.
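For reference, those settings go in logstash.yml (or per pipeline in pipelines.yml). The values below are just what worked for me; tune them to your own core count and memory:

```yaml
# logstash.yml -- example values only; adjust to your hardware
pipeline.workers: 32        # roughly the number of CPU cores (or more, if filters are heavy)
pipeline.batch.size: 1000   # events each worker pulls from the queue per batch
pipeline.batch.delay: 50    # ms to wait for a full batch (this is the default, shown for completeness)
```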

I also increase the initial (-Xms) and maximum (-Xmx) heap sizes in the JVM options to make sure I don't run out of memory. On my systems I set them both to 30g.
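Those go in Logstash's jvm.options file, along these lines (again, 30g is what suits my hardware, not a universal value):

```
# config/jvm.options -- example heap sizing only
-Xms30g
-Xmx30g
```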

I have a large number of filters, so with this setup each pipeline can sustain about 20k eps. What I actually do is send the data to Kafka first, in a separate pipeline that runs before the filter pipeline. I then have 5 separate Logstash systems pull the data from Kafka and process it in parallel. That way I have 5 filter pipelines working on the same data source, which gives me over 100k eps of aggregate filter throughput.
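As a rough sketch of that first stage (the beats input, broker addresses, and topic name below are placeholders, not my actual config), the ingest pipeline just forwards raw events straight to a Kafka topic without doing any filtering:

```
# ingest.conf -- hypothetical pass-through pipeline: receive events, forward to Kafka
input {
  beats {
    port => 5044          # placeholder input; use whatever your source actually is
  }
}
output {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"   # placeholder broker list
    topic_id          => "raw-logs"                  # placeholder topic name
    codec             => json
  }
}
```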

Hope this helps.

Thanks a lot Jeremy. I will surely try this.

Have you set 30GB as both the minimum (-Xms) and maximum (-Xmx) heap for all 5 of your Logstash instances? Are they all running on different machines? And what's your output, is it InfluxDB?

If possible, could you please share the machine configuration that was needed to reach 100k eps: CPU cores, RAM, and any other relevant specs?

Thank you so much again for your kind support.

Hi Ashu-

No problem. My output is Elasticsearch, not InfluxDB. The assumption here is that your bottleneck is the Logstash pipeline with the filters, and that InfluxDB itself is not the bottleneck and can actually receive 50k eps.

My servers have two 18-core processors; with hyperthreading the OS sees 72 cores per system. The systems have 256GB of RAM. Although they are shared systems running multiple services, they have enough headroom to run a Logstash instance without resource contention with the other services. I run an instance of Logstash on all 5 servers, and each is allocated a 30GB heap.

The reason for 30GB has to do with Java's implementation of object pointers: on 64-bit systems the JVM compresses pointers when the heap is roughly 32GB or less. On my systems, with overhead and other factors, Java was still using ordinary object pointers at a 32GB heap and didn't switch to compressed pointers until I reduced the heap to 30GB. Definitely don't go above 32GB, but also check what your specific system is doing.

In your case, the first thing I would do is adjust the JVM memory, pipeline workers, and pipeline batch size to see how much performance you can get out of a single Logstash instance. I would also carefully review the filters and optimize them where possible: the more regex, the slower the filter.
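To illustrate the regex point (the pattern below is made up, not taken from your config), anchoring a grok pattern with ^ and $ lets non-matching lines fail fast instead of forcing the regex engine to backtrack across the whole message:

```
filter {
  grok {
    # anchored pattern: lines that don't start with a timestamp are rejected immediately
    match => { "message" => "^%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}$" }
  }
}
```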

Say that after you tune the filter pipeline you are still only able to achieve 10k eps into InfluxDB (hopefully you'll get more). To reach your goal of 50k eps you would then need that same filter pipeline running on 5 separate Logstash instances (on separate systems).

As mentioned, to enable distributed processing of the data across multiple Logstash instances I use Kafka.

In case you are not familiar with it, Kafka is a distributed message queue. You run multiple Kafka brokers across several systems, and it works on a publisher/subscriber model where your data is written to a topic. A topic can be partitioned, and you can set the replication factor of the partitions to improve durability and distributed performance.

In your case, Filebeat would be the producer reading the raw files and writing them to a Kafka topic. Logstash would then be a consumer reading from that topic. The trick is that each of your 5 Logstash instances reads using the same Kafka consumer group, so they can coordinate which partitions are being read by which pipeline (or more specifically, by which consumer thread in the pipeline). I follow a one-to-one rule here: I have 5 servers running Logstash with 8 consumer threads per Kafka input, which makes 40 threads reading from Kafka in total. So in Kafka I use 40 partitions per topic; that way each Logstash thread reads from its own partition and the best parallelism is achieved.
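A sketch of the consumer side (broker addresses, topic, and group name are placeholders): the same input config runs on each of the 5 Logstash servers, and because they share a group_id Kafka balances the 40 partitions across the 40 consumer threads:

```
# filter.conf -- hypothetical consumer-side pipeline, identical on every Logstash server
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"   # placeholder broker list
    topics            => ["raw-logs"]                # topic created with 40 partitions
    group_id          => "logstash-filter"           # same group on all instances
    consumer_threads  => 8                           # 5 instances x 8 threads = 40 consumers
    codec             => json
  }
}
filter {
  # your existing filters go here
}
output {
  # your InfluxDB (or other) output goes here
}
```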

Hope this is useful, let me know if I can be of any further help.

Thanks,
Jeremy

Thanks a lot Jeremy.

This process should really help me. I am going to try all of this today.

Basically, I am running three Logstash instances, each with a 10GB heap and 10 threads, on three servers, and I am using Redis as a distributed message queue. But the issue I hit very frequently is that Logstash simply stops reading the keys from Redis. When I restart the Logstash service it starts reading again for a while, and then after some time it stops reading the keys once more.

I suspect this is because the pipeline workers and pipeline batch settings are still at their defaults, so Logstash cannot keep up with the keys. I will try changing those.

I am using Filebeat to read the files and send them to Redis; all my Logstash instances read from Redis, process the data, and then send it to InfluxDB.
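For context, a minimal sketch of the Redis-reading side of that setup (host and key name are placeholders, not my exact config); the redis input's own batch_count and threads options are something I plan to look at alongside the pipeline settings:

```
input {
  redis {
    host        => "redis-host"   # placeholder
    data_type   => "list"
    key         => "filebeat"     # placeholder: the list key Filebeat publishes to
    batch_count => 125            # events fetched per Redis call (default shown)
    threads     => 4              # parallel reader threads (assumption; tune as needed)
  }
}
```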

My machines have 32GB of RAM and a 16-core processor.
