Moving from a single Redis instance to a clustered caching pool


(Allen Chan) #1

We currently run a single instance of Redis to cache logs before sending them to Elasticsearch. We want to move to a cluster to avoid a single point of failure. At this point it feels like we need to move away from Redis for the following reasons: 1) the cluster implementation is very new and probably not mature, and 2) the Logstash plugins for Redis are not cluster-aware, from what I see in the GitHub trackers.

So the question is: what is the best clustered queue for Logstash? There are many articles out there, but they don't cover this use case specifically. I read that Kafka can handle a ton of messages per second but doesn't guarantee that message order is maintained (which is very important for logs). I also read that RabbitMQ does not handle as many messages, etc.

Basically the goal is to go from 1 Redis server to 4-5 "product X" servers. Messages would be clustered so that losing nodes would not take down the stack, and Logstash would be able to resiliently pull from any of the servers.


(Jared Kauppila) #2

How many messages a second are you expecting?

Assuming the events are timestamped, exact order of messages shouldn't be an issue since they'll be indexed based on that value.


(Allen Chan) #3

According to Marvel, my Elasticsearch is receiving up to 75,000 messages per second.

Unfortunately, not all logs have a full timestamp. Some only have hours, minutes, and seconds with no date. Without the date it is hard to parse the timestamp, and we cannot assume the current day either.
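To make the ambiguity concrete, here is a minimal Python sketch (the `attach_date` helper is hypothetical, not part of any real pipeline) of why "just assume the current day" fails around midnight:

```python
from datetime import datetime

def attach_date(time_str, received):
    """Naively attach the receive date to a time-only log timestamp.

    If the event was written just before midnight but received just
    after, the reconstructed timestamp lands on the wrong day.
    """
    t = datetime.strptime(time_str, "%H:%M:%S").time()
    return datetime.combine(received.date(), t)

# Event logged at 23:59:58, received at 00:00:01 the next day:
received = datetime(2015, 6, 2, 0, 0, 1)
stamped = attach_date("23:59:58", received)
print(stamped)  # 2015-06-02 23:59:58 -- nearly a full day in the future
```

Any buffering between the source and the indexer widens this window, so the longer events sit in the queue, the more of them get stamped with the wrong day.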


(Thomas Widhalm) #4

I would not use Redis in cluster mode, but would instead use several independent Redis nodes and have my Logstash redis outputs fail over between them.

You can use a single redis output in your Logstash configuration, and Logstash will pick one host and send all events to it. If that Redis node fails, Logstash will pick another one from the list and send to it instead. Don't configure multiple redis outputs (that would duplicate messages); use one output with multiple Redis hosts in the host directive.
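A sketch of what that output might look like (hostnames and the list key are placeholders; adjust to your setup):

```conf
output {
  redis {
    # Several Redis nodes in one output: Logstash picks one and only
    # fails over to another entry if the current one becomes unreachable.
    host      => ["redis-a.example.com", "redis-b.example.com", "redis-c.example.com"]
    data_type => "list"
    key       => "logstash"
  }
}
```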

In a normal setup, messages are read and deleted from Redis before a cluster could even synchronize them to the other nodes anyway.


(Allen Chan) #5

I didn't realize this was the behavior of the redis output, as the docs don't explicitly mention it.
The redis input's host field is only a string, so it can only pull from one Redis server. Does that mean I need to double my Logstash indexers while half of them sit idle?

Unless there is another viable solution (from the list of RabbitMQ, Kafka, etc.), I guess this is the route I will have to go to keep HA.


(Thomas Widhalm) #6

No, you can add multiple redis inputs to your indexers. You have one redis output with multiple hosts in the host directive, and multiple redis inputs with one host each. That way every indexer will try to pull from every Redis node in parallel, and you can add and remove indexers without changing anything in the configuration of your other services.
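The input side of that setup could look something like this on each indexer (again, hostnames and key are placeholders):

```conf
input {
  # One redis input per Redis node; every indexer runs the same set,
  # so each indexer drains all nodes in parallel and any node can
  # feed any indexer.
  redis { host => "redis-a.example.com"  data_type => "list"  key => "logstash" }
  redis { host => "redis-b.example.com"  data_type => "list"  key => "logstash" }
  redis { host => "redis-c.example.com"  data_type => "list"  key => "logstash" }
}
```

Because a Redis list pop hands each message to exactly one consumer, running the same inputs on every indexer spreads the load without duplicating events.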
