Data Replication between datacenters


Our plan is to store logging information in three datacenters. The data should be highly available, even in case of a complete outage of one datacenter.

A tribe node is not an option, as parts of the data won't be available during a datacenter outage.

Our current architecture is: Logstash (Shipper) -> Redis (Broker - no cluster) -> Logstash (Indexer) -> ES.

Our idea is to copy the data to all datacenters at the broker level. We are thinking about using the cluster functionality of Redis. Kafka or RabbitMQ might also be suitable instead of Redis.
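One simple variant of broker-level replication would be to have each shipper write to a broker in every datacenter, so each site holds a full copy of the stream. A minimal sketch of a Logstash shipper output section, assuming hypothetical hostnames (`redis.dc1.example.com` etc.) and the default list key:

```
output {
  # Hypothetical broker hostnames - one Redis instance per datacenter.
  # Each event is pushed to all three brokers, so every datacenter
  # receives a full copy of the log stream.
  redis {
    host => "redis.dc1.example.com"
    data_type => "list"
    key => "logstash"
  }
  redis {
    host => "redis.dc2.example.com"
    data_type => "list"
    key => "logstash"
  }
  redis {
    host => "redis.dc3.example.com"
    data_type => "list"
    key => "logstash"
  }
}
```

Note this is fan-out from the shipper rather than true broker clustering; if one datacenter's broker is unreachable, the shipper has to buffer or drop events for that destination, which is a trade-off to consider against Redis Cluster or Kafka replication.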

Does anybody have experience with such a setup?



I'd suggest you start by checking other threads, as this has been asked a few times before.
If you have specific questions afterwards then we'll be here to help! :smile:

Hello Mark!

I already checked other threads, but they weren't really helpful. Some use backup & restore, which is not a really good solution. I didn't find any articles describing a solution someone has actually run in production. But I will search again. If you know of an article, it would be nice if you could send me a link.

It really depends on your needs.

Sounds like the best option would be to use a broker and then replicate at that level.