The settings from the second redis section will overwrite the first one in the YAML parser. Unfortunately, the first section as written will never be presented to Filebeat at loading time (config processing would warn and quit here).
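As a minimal sketch of this last-key-wins behaviour, here is the same idea with a Python dict literal, which resolves duplicate keys the way most YAML parsers do (the hostnames are placeholders, not from the original config):

```python
# Duplicate keys: the later entry silently replaces the earlier one,
# just as a YAML parser would keep only the last `output.redis` section.
config = {
    "output.redis": {"hosts": ["redis-a:6379"]},  # shadowed, never seen
    "output.redis": {"hosts": ["redis-b:6379"]},  # this one wins
}
print(config["output.redis"]["hosts"])  # → ['redis-b:6379']
```

Only one `output.redis` section survives parsing, which is why the first one never reaches Filebeat.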
What exactly do you want replication for?
Replication is not supported on purpose, as users normally ask for replication in order to send events to multiple environments, e.g. production and development environments. See this GitHub discussion on why this might be a bad idea.
My goal was to duplicate the events from Filebeat, store them in multiple Redis instances, and eventually route the events to different Elasticsearch clusters.
But I think the explanation below clarifies the concept of shipping events from Beats.
The problem is that with multiple Logstash outputs in Beats (essentially doing event routing), these Logstash instances implicitly become coupled via Beats: if one instance is down or unresponsive, the others won't receive any data. A message queue like Kafka helps decouple these systems, as long as Kafka itself is operating.
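A minimal sketch of that decoupled setup, using Filebeat's `output.kafka` (the hostnames and topic name here are placeholder assumptions, not from the original thread):

```yaml
# filebeat.yml — a single Kafka output; each downstream Logstash
# then consumes from Kafka independently, so a slow or dead Logstash
# no longer blocks delivery to the others.
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]   # placeholder brokers
  topic: "filebeat-events"                # placeholder topic
```

Each Logstash instance reads from the topic at its own pace (e.g. via its Kafka input plugin), so back-pressure from one consumer does not propagate back to Beats.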