Logstash doesn't use all available CPU cores

Hey guys,

Sorry if this is already handled in another topic here; I did not find a matching one.

Last week we installed Logstash on a new physical server that has 28 CPU cores (plus 28 Hyper-Threading threads) and 64 GB of RAM.

First off, we run Logstash version 1.5.6.

We configured multiple instances of Logstash with different configs, to make the admin's life a bit easier :slight_smile:

The config of the problematic instance includes tcp inputs as well as grok, prune, drop, mutate, geoip, and ruby filters, plus elasticsearch and zabbix outputs (it is quite long and complex, so I did not post it here; if it would be useful, please tell me).

The elasticsearch output is configured with

workers => 50

and the instance get started with

-w 50

to give it 50 filter workers.
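For context, the relevant part of the setup looks roughly like this (a minimal sketch for a 1.5.x config; the host, the other output settings, and the filter details are placeholders, not our real values):

```conf
output {
  elasticsearch {
    host    => "localhost"   # illustrative host only
    workers => 50            # 50 output worker threads for this output
  }
}
```

The instance itself is then launched with `-w 50` so the filter stage also gets 50 worker threads.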

When the load increases and reaches 20 CPU threads (a usage of 2000%), it stops allocating more resources and keeps running on those 20 threads, although there are over 30 idle threads available.

Is there anything I overlooked? Is one of the filters (maybe ruby) not fully capable of multithreading?

Any feedback is appreciated.


That's pretty old; you should upgrade, as there are a number of performance improvements in 2.x.

Well, easier said than done, since there are quite a few changes to community plugins that cause me a lot of headache :slight_smile: But I guess there's no way around it; if this problem still occurs in Logstash 2.x, I am going to update the thread.

Today I upgraded to Logstash 2.3.1, and the results were rather catastrophic.

Again, the hardware is the same: 52 CPU threads, 64 GB RAM, SSDs, Debian 8.0 Jessie. However, Logstash uses 15-20 CPU threads at its maximum, so the events can't be processed fast enough and pile up in a buffer queue.

I tried different Logstash settings: 120 workers with a batch size of 500, 52 workers with a batch size of 1500, 200 workers with a batch size of 250, and 150 workers with a batch size of 1000 - none of them made any big change to the CPU utilization at all.
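For reference, on 2.3.x these settings are passed on the command line via the pipeline flags; the config path and numbers below are just one of the combinations I tried:

```shell
# -w sets the number of pipeline worker threads,
# -b sets the batch size each worker pulls per iteration
bin/logstash -f /etc/logstash/conf.d/instance.conf -w 120 -b 500
```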

Regarding the config of the given instance: it is quite large, as already said - a tcp input, many grok and mutate filters, some ruby code, and ES/Zabbix outputs.

I have downgraded to 1.5.6 again to keep our system productive; however, I can upgrade a node at any time to test new settings.

Any feedback is appreciated.


Any tips on this?

Hello german23, I'm encountering the very same problem on Logstash 2.4.1.
The CPU is stuck at 65% :confused:
My workflow is
a lot of different logs --> logstash --> rabbitmq

Did you find a solution to your problem?

Regards, Guillaume

Hi glmrenard,

I am afraid I have to tell you I did not yet.

I upgraded from 1.5.6 to 2.1.3 (since that is the last version before the worker-thread catastrophe) and am currently running on that version.

My plan is to upgrade to 5.x during this year, since in 5.x we have the possibility to inspect individual filter and output stages of the pipeline and see which one is slowing down the instance.
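For anyone who wants to try that: in 5.x the monitoring API exposes per-plugin event counts and timings. Assuming the default API port of 9600, a query looks roughly like this (endpoint name as I understand the 5.x docs; verify against your version):

```shell
# Per-plugin pipeline statistics (events in/out, duration per filter/output)
curl -XGET 'http://localhost:9600/_node/stats/pipeline?pretty'
```

The plugins with the highest `duration_in_millis` relative to their event count are the ones throttling the pipeline.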

Maybe you want to take a look at it too. Until then, I can only suggest you add more instances of Logstash and keep the config for each one as small as possible :slight_smile:


Thanks for sharing the info. I am also facing the same issue, although I have split the configuration so that each file contains only one filter. My Logstash service is utilizing more than 65% of the CPU.