Increase UDP input plugin performance

I need to increase UDP input plugin performance, but only for port 514.
I have a machine with 16 CPUs and 32 GB of RAM.
I am running Logstash 2.3.3 with the "-b 500 -w 2" options.

input {
    udp {
        port => 514
        codec => plain { charset => "ISO-8859-1" }
        workers => 3
    }
    udp {
        port => 513
        tags => ["513"]
    }
}

I changed workers from 2 to 3 for the udp input, but performance does not seem to have increased.
What can I do?

What does the rest of the pipeline configuration look like? If filters and outputs can't keep up, backpressure will cause the udp inputs to drop packets.

I have spent A LOT of time figuring out how to get the most performance from the UDP input, which is critical for syslog and especially for flow (Netflow, IPFIX, sFlow) use-cases. I am planning a more complete article on this topic, explaining the "why" behind the following. For now, just try this...

  1. Run these commands and add them to a file under /etc/sysctl.d so they persist across reboots (see the example file after this list):
sudo sysctl -w net.core.somaxconn=2048
sudo sysctl -w net.core.netdev_max_backlog=2048
sudo sysctl -w net.core.rmem_max=33554432
sudo sysctl -w net.core.rmem_default=262144
sudo sysctl -w net.ipv4.udp_rmem_min=16384
sudo sysctl -w net.ipv4.udp_mem="2097152 4194304 8388608"
  2. Add these options to your UDP input:
workers => 4 (or however many cores/vCPUs you have)
queue_size => 16384
  3. In your logstash.yml (or pipelines.yml if that is what you are using) use these settings:
pipeline.batch.size: 512
pipeline.batch.delay: 250
  4. In the startup.options file change LS_NICE to 0 and re-run system-install:
# Nice level
LS_NICE=0
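
For step 1, here is a minimal sketch of a persistent sysctl file; the filename 90-logstash-udp.conf is just an example, any .conf file under /etc/sysctl.d will be picked up:

# /etc/sysctl.d/90-logstash-udp.conf  (example filename)
net.core.somaxconn = 2048
net.core.netdev_max_backlog = 2048
net.core.rmem_max = 33554432
net.core.rmem_default = 262144
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_mem = 2097152 4194304 8388608

You can apply the file immediately with sudo sysctl --system instead of rebooting.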

After making these changes you can (re)start Logstash. You should see a significant boost in throughput.

That all said, what @magnusbaeck says is true. Back pressure can cause the inputs to slow or pause, which can cause lost packets once buffers fill. The above changes will help, but if the throughput of your pipeline is lower than the rate of incoming messages, increasing the kernel buffers and the input queue_size will only delay the inevitable.

Back pressure during event peaks remains one of the biggest reasons to add a message queue (like redis or kafka) to the ingest architecture:

logstash_collect --> redis/kafka --> logstash_process --> elasticsearch
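
As a rough sketch of that split (the host names and the Redis key below are placeholders, not values from this thread), the collecting instance ships raw events to Redis and a separate processing instance reads from it:

# logstash_collect: receive syslog and hand it straight to the broker
input {
    udp {
        port => 514
        codec => plain { charset => "ISO-8859-1" }
    }
}
output {
    redis {
        host => "redis.example.com"    # placeholder broker host
        data_type => "list"
        key => "syslog"                # placeholder list key
    }
}

# logstash_process: pull from the broker, filter, and index
input {
    redis {
        host => "redis.example.com"
        data_type => "list"
        key => "syslog"
    }
}
filter {
    # translate/drop logic goes here
}
output {
    elasticsearch {
        hosts => ["es.example.com:9200"]   # placeholder Elasticsearch host
    }
}

This keeps the lightweight collector free to drain the UDP socket quickly, while the broker absorbs event peaks for the heavier processing pipeline.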


This is the rest of the pipeline. @rcowart I will try your tips. Many thanks to both.

filter {
    translate {
        field => "message"
        override => true
        exact => true
        regex => true
        dictionary_path => "/var/drop_events/drop_events.yaml"
        destination => "drop_events"
        fallback => "0"
    }

    if [drop_events] == 1 {
        drop {}
    }
    mutate {
        remove_field => ["logstash_drop_event"]
    }
}

output {
    stdout { codec => rubydebug }

    if "514" in [tags] {
        file {
            path => "/var/log/%{tags}.log"
        }
    } else if "513" in [tags] {
        file {
            path => "/var/log/%{tags}.log"
        }
    } else {
        file {
            path => "/var/log/LTMS/others.log"
        }
    }
}
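
For context on the translate filter above: with regex => true, dictionary_path points to a YAML map whose keys are regular expressions matched against [message] and whose values are written to [drop_events]. The patterns below are purely hypothetical examples, not taken from this thread:

# /var/drop_events/drop_events.yaml
# keys are regexes matched against the message, values become [drop_events];
# messages that match nothing fall back to "0"
".*debug.*": 1
".*keepalive.*": 1

With a dictionary like this, any message matching one of the patterns gets drop_events set to 1 and is discarded by the conditional in the filter block.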
