Logstash memory sizing

I have a setup where I'll be receiving tons of netflow traffic (we have multiple 10 Gbit devices that will be forwarding netflow from many tens of thousands of end users). Needless to say this will be brutal on CPU, but how should I size memory to handle it all? I'd eventually like to scale out for redundancy behind a load balancer, with multiple VMs carrying the load, and memory is at a premium in our datacenter.
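
For reference, the input side is nothing fancy, just a UDP listener with the netflow codec along the lines of the sketch below (the port, worker count, and buffer sizes are placeholders, not our exact settings):

```
input {
  udp {
    port                 => 2055        # placeholder netflow export port
    codec                => netflow
    workers              => 4           # parallel UDP reader threads
    receive_buffer_bytes => 16777216    # larger kernel buffer to absorb bursts
    queue_size           => 16384       # packets buffered before the pipeline
  }
}
```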

I'm estimating we'll see around 10k messages per second, though it could be more or less. In a brief test, two cores weren't enough: both vCPUs were pegged and Logstash was dropping packets/messages on the floor. How do I gauge how much memory Logstash needs?

It's unlikely to be a memory issue; Logstash only holds 40 events in memory at a time.
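
The exact in-flight count depends on your version and pipeline settings; on recent releases it's roughly pipeline.workers × pipeline.batch.size, which you can inspect and tune in logstash.yml. The values below are only illustrative:

```
# logstash.yml -- settings that bound how many events are in flight (illustrative)
pipeline.workers: 2        # typically defaults to the number of CPU cores
pipeline.batch.size: 125   # events each worker thread collects per batch
pipeline.batch.delay: 50   # ms to wait before flushing a partial batch
```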

So as long as I have enough CPU to crunch through it, I could probably get away with 2 GB of memory on each Logstash node?

That would be reasonable, unless you are dealing with massive events!
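
If you do cap the VMs at 2 GB, keep in mind that figure has to cover the OS and Logstash's off-heap usage as well as the JVM heap. The heap itself is set in config/jvm.options (or via LS_HEAP_SIZE on older releases); something like the following leaves headroom on a box that size (sizes are an assumption, not a tested recommendation):

```
# config/jvm.options -- illustrative heap sizing for a ~2 GB VM
-Xms1g
-Xmx1g
```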
