(Logstash newbie) Logstash 5.0. 150k flows/min limit

Hello. I just started using Logstash 5, but I am stuck and cannot figure out why.
If anybody has any ideas or suggestions, please let me know.

I use Logstash to capture netflow V9 from UDP ports and forward it towards my Elasticsearch cluster.

In my setup I have two identical Logstash nodes running 2.4.0, each collecting around 1.6 million flows/min.
I installed Logstash 5 on one of them and configured the pipeline config file, jvm.options, and logstash.yml, but for some reason I hit a 150k flows/min ceiling.
I can see from htop that Logstash is able to allocate 50 GB of memory, but the CPU cores are mostly idle.

Starting from the same settings as the other Logstash node, I have tried scaling every setting I could find individually, with no luck.

In another post I saw something about the number of open files, but since my setup reads from a UDP port, I figured that might not be it.

My own guess was that the problem lay with the number of workers, or the combination of workers, batch size, and batch delay, but changing them has had no effect.
So I must be missing something.

Best regards Simon.

What's your config look like?
What OS?
What are the node's specs?

input {
  udp {
    port => xxxx
    codec => netflow {
      versions => [9]
    }
  }
}

output {
  elasticsearch {
    hosts => ["x.x.x.x", "x.x.x.x"]
    index => "logstash-%{+YYYY.MM.dd.HH}"
  }
}

logstash.yml:
node.name: logstash
path.data: /var/lib/logstash
pipeline.workers: 112
pipeline.output.workers: 56
pipeline.batch.size: 4000
pipeline.batch.delay: 1
path.logs: /var/log/logstash


Rest of the config is default Logstash config.

Debian "jessie" 8.6

Logstash node spec
RAM: 74 GB
CPU: 56 cores, 2.4 GHz

The number of worker threads in the UDP input plugin defaults to 2, and since you are using the netflow codec, which does a fair bit of processing, I would recommend trying to increase this. I would probably also reduce the number of pipeline workers and align this with the output workers.
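For reference, a minimal sketch of what that change could look like, assuming Logstash 5.x; the workers value of 8 and the queue_size shown here are illustrative, not tuned recommendations:

input {
  udp {
    port => xxxx
    workers => 8          # default is 2; raise it when the codec is CPU-heavy
    queue_size => 10000   # larger input buffer helps absorb UDP bursts
    codec => netflow {
      versions => [9]
    }
  }
}

And in logstash.yml, bringing the pipeline workers down from 112 and in line with the output workers, e.g.:

pipeline.workers: 56
pipeline.output.workers: 56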

That worked :slight_smile: Many thanks. I had removed workers, queue_size and flush_size from the config file when moving to 5.0, but I should have let the workers setting stay.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.