I have been trying to get Logstash to process as many IPFIX (Netflow v10) packets per second as possible. I have seen cases where Logstash users easily reached 40k events per second, or even up to 90k:
I have tried many combinations of the settings: flush_size, workers (input workers), queue_size, options in logstash.yml, and sysctl.conf parameters. My current Logstash configuration is as follows; with it I can process approximately 5k events per second.
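For reference, this is a minimal sketch of the kind of UDP input block I have been tuning (the port and the workers/queue_size/buffer values here are just examples of the knobs I varied, not a recommendation):

```conf
input {
  udp {
    port                 => 2055       # IPFIX/Netflow export port (example)
    workers              => 4          # threads reading from the UDP socket
    queue_size           => 16384      # packets buffered before the pipeline
    receive_buffer_bytes => 16777216   # SO_RCVBUF requested for the socket
    codec                => netflow
  }
}
```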
I run Logstash instances on VMware ESX hosts, as virtual machines with Ubuntu Server 16.10, 8 cores and 8 GB RAM each. The Logstash heap size is set to min: 2 GB and max: 4 GB. The virtual machines are connected by 10 Gbit/s fiber and use VMXNET3 adapters.
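On the sysctl.conf side, the parameters I have mainly been adjusting are the kernel's UDP receive-buffer limits, roughly like this (the sizes are values I experimented with, not tuned recommendations):

```conf
# /etc/sysctl.conf -- example values only
net.core.rmem_max           = 33554432   # max SO_RCVBUF a socket may request (32 MB)
net.core.rmem_default       = 33554432   # default receive buffer for new sockets
net.core.netdev_max_backlog = 10000      # per-CPU packet backlog before the stack drops
```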
When I increase the number of flows per second, I start to notice packet drops:
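I spot the drops via the kernel's UDP counters; assuming a Linux host, something like:

```shell
# The Udp: lines in /proc/net/snmp include InErrors and RcvbufErrors,
# which increment when datagrams are dropped because the receiving
# socket's buffer overflowed.
grep '^Udp:' /proc/net/snmp
```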
Is the Netflow plugin (logstash-codec-netflow) designed for a high number of flows per second? I need to be able to parse at least 40k flows per second, probably a few multiples of that. I have doubts about whether the Netflow codec is the right way to go. If not, I need to find another way!