Increase UDP input plugin performance

I have spent A LOT of time figuring out how to get the most performance from the UDP input, which is critical for syslog and especially for flow (Netflow, IPFIX, sFlow) use cases. I am planning a more complete article on this topic explaining the "why" behind the following. For now, just try this...

  1. Run these commands and add them to a file under /etc/sysctl.d so they are applied at each boot (a persistent example file follows this list):
sudo sysctl -w net.core.somaxconn=2048
sudo sysctl -w net.core.netdev_max_backlog=2048
sudo sysctl -w net.core.rmem_max=33554432
sudo sysctl -w net.core.rmem_default=262144
sudo sysctl -w net.ipv4.udp_rmem_min=16384
sudo sysctl -w net.ipv4.udp_mem="2097152 4194304 8388608"
  2. Add these options to your UDP input (a fuller input sketch follows this list):
workers => 4        # or however many cores/vCPUs you have
queue_size => 16384
  3. In your logstash.yml (or pipelines.yml if that is what you are using), use these settings (a per-pipeline example follows this list):
pipeline.batch.size: 512
pipeline.batch.delay: 250
  4. In the startup.options file, change LS_NICE to 0 and re-run system-install (the command is shown after this list):
# Nice level
LS_NICE=0
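
To make the kernel settings from step 1 survive a reboot, they can go into a drop-in file under /etc/sysctl.d. A minimal sketch, assuming the (hypothetical) filename 90-logstash-udp.conf; any *.conf name works:

# /etc/sysctl.d/90-logstash-udp.conf  -- example filename
net.core.somaxconn = 2048
net.core.netdev_max_backlog = 2048
net.core.rmem_max = 33554432
net.core.rmem_default = 262144
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_mem = 2097152 4194304 8388608

You can load it without rebooting with sudo sysctl --system.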
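
Here is a minimal sketch of the UDP input from step 2. The port is a placeholder for whatever your syslog/flow sources send to; only workers and queue_size are the settings being recommended here:

input {
  udp {
    port       => 5144     # example port, adjust for your source
    workers    => 4        # roughly one per core/vCPU
    queue_size => 16384
  }
}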
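
If you run multiple pipelines, the batch settings from step 3 can also be set per pipeline in pipelines.yml instead of globally in logstash.yml. A sketch, with a made-up pipeline id and config path:

- pipeline.id: udp_ingest
  path.config: "/etc/logstash/conf.d/udp.conf"
  pipeline.batch.size: 512
  pipeline.batch.delay: 250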
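
For step 4, the LS_NICE change only takes effect after system-install regenerates the service definition. On a standard package install the script typically lives under /usr/share/logstash/bin:

sudo /usr/share/logstash/bin/system-install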

After making these changes you can (re)start Logstash. You should see a significant boost in throughput.

That all said, what @magnusbaeck says is true. Back pressure can cause the inputs to slow or pause, which can cause lost packets once the buffers fill. The above changes will help, but if the throughput of your pipeline is less than the rate of incoming messages, increasing the kernel buffers and the input queue_size will only delay the inevitable.

Back pressure during event peaks remains one of the biggest reasons to add a message queue (like Redis or Kafka) to the ingest architecture:

logstash_collect --> redis/kafka --> logstash_process --> elasticsearch
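
A minimal sketch of that split using Redis as the buffer. The host, list key, and Elasticsearch address are placeholders; the point is that the collector only reads UDP and pushes to the queue, while the processor does the heavy filtering and indexing:

# logstash_collect: output side (keep filters to a minimum here)
output {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "logstash"
  }
}

# logstash_process: input side
input {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "logstash"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}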
