I have a use case where I can have multiple ES clusters running, and I want to be able to send logs from multiple sources to 2 ES clusters:
1st gets business-critical logs
2nd gets all common, VM-specific logs
This way we can reduce the processing and footprint of most of the ES clusters, except for the one which gets all the common logs.
From my understanding, there is currently no way to send logs to multiple Logstash outputs from a single FB (v1.2.2) instance. One possible workaround could be to run 2 instances of FB.
Is that the best way to achieve this? If yes, can someone explain how I can go about it and which settings I'd need to change in order to get this to work?
I'm open to suggestions and/or a better way to achieve the same.
I use two instances of Filebeat for this. I run 2 services, filebeat.service and filebeat2.service. The second one takes its configuration from /etc/filebeat2 instead of /etc/filebeat. Just configure in the service definition that it reads the new config, and it will work just fine. I also use two binaries; I renamed the copy to filebeat2.
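A minimal sketch of how that second instance could be set up, assuming a systemd-based host; the paths, the unit file contents, and the Logstash hostname are illustrative placeholders, not something from the Filebeat docs:

```shell
# Copy the binary and create a separate config directory for the second instance
sudo cp /usr/bin/filebeat /usr/bin/filebeat2
sudo mkdir -p /etc/filebeat2
sudo cp /etc/filebeat/filebeat.yml /etc/filebeat2/filebeat.yml

# Edit /etc/filebeat2/filebeat.yml so its logstash output points at the
# second cluster's Logstash endpoint, e.g.:
#   output:
#     logstash:
#       hosts: ["logstash-common.example.com:5044"]   # placeholder host
# Also give the second instance its own registry file (registry_file under
# the filebeat section) so the two instances don't overwrite each other's state.

# A second unit that runs the renamed binary with the new config
sudo tee /etc/systemd/system/filebeat2.service <<'EOF'
[Unit]
Description=Filebeat (second instance)
After=network.target

[Service]
ExecStart=/usr/bin/filebeat2 -c /etc/filebeat2/filebeat.yml
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now filebeat2.service
```

The separate registry file is the easy thing to miss: both instances would otherwise track file read offsets in the same place and interfere with each other.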