Maximising Logstash CPU Utilisation

(Aditya Kumar) #1

Hi

I have installed Logstash 6.6.0 on a dedicated Debian 9 machine.

This machine has 2 x quad-core CPUs (8 cores total).

My workflow is that I start with blank Elasticsearch indices (on another machine) and process log files with Logstash on the Debian machine.

What I've noticed is that my CPU utilisation is not very high. Since this is a dedicated Logstash machine, I want to hammer the CPUs so that these log files are processed as fast as possible.

I've set pipeline.workers to 8 in logstash.yml.
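For reference, a minimal sketch of the relevant logstash.yml settings (the batch size value here is illustrative, not a tuned recommendation):

```
# logstash.yml
# One worker per core on a 2 x quad-core machine
pipeline.workers: 8
# Number of events each worker collects before running filters/outputs;
# larger batches can improve throughput at the cost of latency and memory
pipeline.batch.size: 125
```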

Can anyone tell me what else I can do? I want to make sure the machine's full potential is being used to process log files.

(Brandon Hatch) #2

Are you using the file input in Logstash to read a local file, and then sending it across the network to a separate ES cluster?
In our experience, the more filters we add, the more CPU-bound Logstash becomes. The inputs and outputs seem to use comparatively little CPU. This means that often the limit is not CPU, but how quickly you can read the data and how quickly you can send it off. Logstash is frequently waiting on something upstream or downstream, so CPU usage shows low.
For example, if your ES cluster can only accept 2k events per second, it doesn't matter that your Logstash machine can handle 20k events per second. The same applies if the file input can only read 500 events per second; Logstash and ES will appear to be capped in performance. From what I remember, the file input has been known to be more or less single-threaded. So if you have 5 different log files, setting up 5 separate file inputs may give you more throughput than one file input referencing all 5 files. If you have one big file, you may be out of luck, though; I don't think it can run multiple threads reading a single file.
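The one-input-per-file suggestion could be sketched like this (paths and sincedb locations are hypothetical examples; each separate file input block gets its own reader, unlike a single file input listing all paths):

```
input {
  # Separate file input per log file, each with its own sincedb,
  # instead of one input with path => ["/var/log/app/*.log"]
  file { path => "/var/log/app/app1.log" sincedb_path => "/var/lib/logstash/sincedb_app1" }
  file { path => "/var/log/app/app2.log" sincedb_path => "/var/lib/logstash/sincedb_app2" }
  file { path => "/var/log/app/app3.log" sincedb_path => "/var/lib/logstash/sincedb_app3" }
}
```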

(system) closed #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.