I forward my logs to Redis and use the Logstash Redis input. My Logstash filter uses the aggregate plugin. The real event rate is 20k events per second, but Logstash can only process about 1.5k events per second. I use one Logstash worker. My server has 16 CPUs (Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz), but Logstash uses only a small share of the resources and performance is poor. How can I resolve this?
I tried changing batch_count and threads, but it made no difference.
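For reference, the input section I experimented with looked roughly like this (the host, key name, and values are placeholders, not my exact config):

```
input {
  redis {
    host        => "127.0.0.1"  # Redis host (placeholder)
    data_type   => "list"       # pop events from a Redis list
    key         => "logstash"   # list key the shipper pushes to (placeholder)
    batch_count => 125          # events fetched per Redis round trip
    threads     => 4            # parallel input threads
  }
}
```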
Does really no one know the answer to this question?
If you only have a single Logstash worker, processing will be single-threaded, which naturally will limit your throughput. Allowing only a single Logstash worker is a very severe limitation, so you may want to look at achieving the aggregation some other way. What does the data you are aggregating look like? What is it you are looking to achieve?
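For context, the setting in play is the pipeline worker count. Normally you would scale it up toward the number of CPUs, but the aggregate filter requires exactly one worker so that all events of a task pass through the same filter instance in order. A sketch (values shown are illustrative):

```
# logstash.yml -- with the aggregate filter this must stay at 1;
# raising it would break the aggregation's correctness:
pipeline.workers: 1
pipeline.batch.size: 125

# equivalent command-line flag:
# bin/logstash -w 1
```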
What I need is described here: Specific GROK filter for multi-line Postgresql log, and here: https://github.com/logstash-plugins/logstash-filter-aggregate/issues/41
Given the limitations of the aggregate filter plugin, I don't see any easy or neat way to improve performance without modifying the plugin or creating a custom one. Maybe someone else has got a solution or workaround?