I've set up ELK on AWS in a fault-tolerant configuration (multi-AZ), and
have been looking at integrating redis into the stack to ease the load on
logstash, as is commonly recommended. However, it seems to me that this
just introduces a single point of failure into an otherwise redundant
setup. While I gather that redis can be clustered, I have yet to find any
documentation or how-tos that focus on using clustered redis as part of a
fault-tolerant ELK setup.
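For context, the commonly recommended broker layout I'm referring to looks roughly like this (a minimal sketch only; the file path, redis hostnames, and key name are placeholders, and exact option names vary a bit between logstash versions):

```
# Shipper-side logstash config (sketch): buffer events in redis
# instead of sending them straight to the indexing tier.
input {
  file {
    path => "/var/log/app/*.log"   # placeholder log path
  }
}

output {
  redis {
    # The redis output accepts a list of hosts and fails over between
    # them, which is simpler than (and not the same as) a true redis cluster.
    host      => ["redis-a.internal", "redis-b.internal"]
    data_type => "list"
    key       => "logstash"
  }
}
```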
I have my logstash instances load balanced and could theoretically scale
out that tier if those instances were to become overloaded. Would anyone
recommend this as a suitable alternative to having a single redis node?
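In that setup the shippers would simply point at the load balancer instead of a broker. As a rough sketch (the ELB hostname and port are placeholders, and I'm using the plain tcp output/input here only as a stand-in for whatever shipping protocol is actually in use):

```
# Shipper side: send events to the ELB fronting the logstash tier.
output {
  tcp {
    host  => "logstash-elb.internal"   # placeholder ELB DNS name
    port  => 5000
    codec => json_lines
  }
}

# Indexer side (each logstash instance behind the ELB):
input {
  tcp {
    port  => 5000
    codec => json_lines
  }
}
```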
Another option is to put a redis instance on each logstash server, so that logstash just points at 127.0.0.1:6379. However, if a logstash server dies completely, you'll also lose any events still sitting in the redis queue on that server, so this would probably be my second choice.
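Roughly, each logstash instance would then read from its own local queue, something like this (the key name and the Elasticsearch endpoint below are just examples; option names differ slightly between logstash versions):

```
# Indexer-side sketch: each logstash instance consumes from the redis
# running on the same box.
input {
  redis {
    host      => "127.0.0.1"
    port      => 6379
    data_type => "list"
    key       => "logstash"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch.internal:9200"]   # placeholder endpoint
  }
}
```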