I would like to configure my Filebeat instances to deliver each record to two different hosts. In other words, I have multiple clusters and I would like log data delivered to both. What is the appropriate approach?
The use case is that I am going from 2.x to 5.x by standing up a new cluster in parallel and I want to test the new cluster before cutting over.
Beats intentionally does not support duplicating events to multiple outputs, because that would create an indirect coupling between the two systems: back-pressure from system 1 would affect system 2. You can decouple the target systems by shipping beats -> Kafka and using a distinct consumer group per target system to read the events out of Kafka. Don't have a single LS instance output the events to both hosts (that would couple the target systems again); instead run one LS instance/pipeline per target host. For example, with one system for production and a second for development, you don't want the dev environment (which may be down) to affect the production environment.
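A minimal sketch of that setup, assuming placeholder hostnames (`kafka1`, `es2x-host`, `es5x-host`) and a topic named `filebeat-logs` — adjust these to your environment:

```yaml
# filebeat.yml — ship events to Kafka instead of directly to either cluster
output.kafka:
  hosts: ["kafka1:9092"]
  topic: "filebeat-logs"
```

Then one Logstash pipeline per target cluster, each with its own consumer group so the clusters consume independently:

```
# logstash-es2x.conf — feeds the existing 2.x cluster
input {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topics            => ["filebeat-logs"]
    group_id          => "es2x-consumers"   # distinct consumer group
    codec             => "json"             # Filebeat writes JSON to Kafka
  }
}
output {
  elasticsearch { hosts => ["es2x-host:9200"] }
}

# logstash-es5x.conf — feeds the new 5.x cluster (separate LS instance)
input {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topics            => ["filebeat-logs"]
    group_id          => "es5x-consumers"   # different group => independent offsets
    codec             => "json"
  }
}
output {
  elasticsearch { hosts => ["es5x-host:9200"] }
}
```

Because each consumer group tracks its own offsets in Kafka, the 5.x pipeline can lag or go down entirely without slowing the 2.x pipeline, which is exactly the decoupling you want while testing before cutover.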