We have a requirement to process 20,000 events/second. Our Logstash pipeline uses a few grok filters for event processing. I would like to know how many cores and nodes are required.
This will depend a lot on the size and complexity of your data and on how well you have optimised your grok filters. If you can share your config, I am sure you can get some feedback; otherwise I would recommend running a benchmark with real data to see what throughput a single node gives you, then scaling out from there.
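As a rough illustration of how such a benchmark could look, here is a minimal pipeline sketch. The log line, grok pattern, and field names are placeholders, not your actual format; the idea is to replay a representative event through your real filters and measure throughput, adjusting `-w` (pipeline workers) to see how it scales with cores.

```
# Hypothetical benchmark pipeline -- substitute a sample of your real log lines
# and your actual grok patterns.
input {
  generator {
    # Replay one representative event many times so the filter stage is the bottleneck.
    lines => ['2024-01-01T12:00:00Z INFO web-01 GET /api/items 200 15ms']
    count => 1000000
  }
}

filter {
  grok {
    # Anchoring with ^...$ and using specific patterns instead of GREEDYDATA
    # keeps failed matches cheap, which matters a great deal at 20k events/s.
    match => {
      "message" => "^%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{HOSTNAME:host} %{WORD:verb} %{URIPATH:path} %{NUMBER:status:int} %{NUMBER:took:int}ms$"
    }
  }
}

output {
  # Discard events; read throughput from the monitoring API instead,
  # e.g. GET http://localhost:9600/_node/stats/pipelines
  null {}
}
```

Running this with different worker counts (`bin/logstash -w 4`, `-w 8`, ...) on one node should tell you roughly how many events/second per core you get with your filters, which you can then divide into the 20,000 events/second target to size the cluster.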