How many Redis input plugins can Logstash handle?

Hi guys,

Is there a limit on how many Redis input plugins can be declared in a single instance of Logstash?

We decided some time ago to split the Redis keys per client/system, so that every system had its own queue in Redis and a sudden increase in the volume of logs from one system wouldn't affect the indexing of logs from the other systems.

Ex:

Key = Logstash:client:system

The problem is that we now have 133 systems in the ELK stack and, consequently, 133 input declarations on every instance of Logstash.
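For reference, each system ends up as its own `redis` input block in the pipeline config, along these lines (the host and key values here are made-up examples following the key scheme above):

```
input {
  redis {
    host      => "redis.example.org"     # example host
    data_type => "list"
    key       => "Logstash:acme:billing" # one block per client/system key
  }
  # ...repeated once per system, 133 blocks in total
}
```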

Last month we had about 110 systems and we were indexing about 20,000 logs/sec.

Now, with 133 systems, the indexing rate has dropped to 200 logs/sec.

I've tested with only one Redis input on one of my indexers and the indexing rate came back to normal values.

> We decided some time ago to split the Redis keys per client/system, so that every system had its own queue in Redis and a sudden increase in the volume of logs from one system wouldn't affect the indexing of logs from the other systems.

But if the same Logstash instance is processing all messages, indexing will be affected when a single host spews a large volume of events into Logstash's input queue. You won't see a complete standstill of indexing for the other hosts, but they're still competing for the same resources.

We discovered with VisualVM that the Logstash heap wasn't big enough. Increasing the heap size raised the indexing rate to 3,000 logs/sec, but now we are getting bulk rejections from Elasticsearch.
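For anyone else hitting this, where the heap gets set depends on the Logstash version; the sizes below are just examples to adjust after profiling:

```
# Logstash 1.x/2.x: environment variable read by the startup script
LS_HEAP_SIZE=2g

# Logstash 5.0+: set in config/jvm.options instead
-Xms2g
-Xmx2g
```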

I think we might simplify our Redis keys, specialize our indexers per group of keys, and experiment with the batch count and flush size values to see whether we can get better indexing performance.
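A sketch of what one of those specialized indexers might look like, assuming a single consolidated key per group (the host, key name, and numbers are hypothetical; `batch_count` is an option of the redis input, and `flush_size` existed on older versions of the elasticsearch output):

```
input {
  redis {
    host        => "redis.example.org"  # hypothetical host
    data_type   => "list"
    key         => "Logstash:group-a"   # consolidated key for this indexer's group
    batch_count => 250                  # fetch more events per Redis round trip (default 125)
  }
}

output {
  elasticsearch {
    hosts      => ["es.example.org"]
    flush_size => 1000                  # bulk size to tune against the ES rejections
  }
}
```

The idea is that fewer inputs per indexer plus larger batches reduces per-input overhead, while the bulk size is tuned down if Elasticsearch keeps rejecting.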

What do you recommend @magnusbaeck?

This concept has worked fine until now. When we have an occasional spike in one system, the backlog is retained in Redis and the logs from the other systems are still delivered in near real time.