Can anyone suggest a way to read entries from Redis faster? I seem to be inserting them faster than I can read them. I have 8 Logstash indexers running, and either they can't read fast enough or there is a bottleneck somewhere else in Logstash that I am not aware of that is slowing down the ingestion rate. Any help would be appreciated.
Have you benchmarked your downstream system(s), e.g. Elasticsearch, to verify that it is not what is limiting throughput? If so, what does your configuration look like?
I have not... can you recommend how to do that?
The easiest way might be to generate a number of large files containing data in the same format you receive from Redis, and then use the file input to load them into Elasticsearch as quickly as possible while monitoring Elasticsearch. Use the same number of Logstash processes, and make sure to set filter workers to an appropriate value if you have not already done so. The file input is quite efficient, so if this gives you the same indexing throughput as with Redis, the bottleneck is likely not Redis at all.
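To make that concrete, a minimal sketch of what such a file-based test pipeline could look like (the path, sincedb setting, and Elasticsearch host are placeholders, and the exact option names can vary between Logstash versions):

```
input {
  file {
    # Point this at the pre-generated benchmark files.
    path => "/tmp/benchmark/*.log"
    # Read each file from the start instead of tailing it.
    start_position => "beginning"
    # Discard read-position state so each run re-reads everything.
    sincedb_path => "/dev/null"
  }
}

filter {
  # Keep the exact same filters as your Redis pipeline here,
  # so only the input stage differs between the two tests.
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

Run this with the same number of filter workers as your Redis setup (e.g. the `-w` command-line flag) so the only variable you change is the input, then compare the indexing rate you observe in Elasticsearch against what you see when reading from Redis.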