I would like to have Filebeat in my k8s environment push logs from a set of containers to different Elasticsearch clusters, so that containers C1 and C2 go to es1 and C3 and C4 go to es2. The only way I could think of is to make the containers log to files and build Filebeat sidecars into the pods. Is there a way to accomplish this using the Filebeat DaemonSet and capturing stdout/stderr from the containers?
In general it is not possible to ship logs from a single Filebeat instance to multiple outputs, but you have some options:
Send all logs to Logstash, and do the output selection there using conditionals (see the Logstash sketch after this list). When using multiple outputs with Logstash, take into account that if one of them is unavailable, all event processing is blocked, even for the other outputs.
Send all logs to Kafka, and have two Logstash instances reading and filtering, one for each Elasticsearch cluster. This doesn't have the disadvantage of the previous option, but requires more infrastructure (a Kafka cluster, one Logstash instance per Elasticsearch cluster...).
Deploy multiple Filebeat instances on your nodes, one for each output (for example, as a second DaemonSet), all of them with the same configuration except for:
A different registry file, so they can independently keep track of processed logs
A different output configuration
A processor to drop the events each output is not interested in (see the filebeat.yml sketch below)
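For the first option, the routing in Logstash could look roughly like the sketch below. The field name and the es1/es2 hosts are assumptions based on your example and on the Kubernetes metadata Filebeat adds; adjust them to your setup:

```
# Logstash pipeline output: route events to one of two Elasticsearch
# clusters based on the container name added by Filebeat.
output {
  if [kubernetes][container][name] in ["C1", "C2"] {
    elasticsearch { hosts => ["http://es1:9200"] }
  } else if [kubernetes][container][name] in ["C3", "C4"] {
    elasticsearch { hosts => ["http://es2:9200"] }
  }
}
```

Keep in mind that the blocking behaviour mentioned above applies here: if es1 is down, events destined for es2 are also held back.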
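For the last option, the configuration of one of the Filebeat instances could look something like this. It is only a sketch, assuming a recent Filebeat with the container input and the add_kubernetes_metadata processor; the container names, paths and hosts are placeholders from your example:

```
# filebeat.yml for the "es2" instance; the "es1" instance is identical
# except for the data path, the drop_event condition and the output hosts.
path.data: /usr/share/filebeat/data-es2    # separate registry location

filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

processors:
  # Keep only the containers this instance is responsible for.
  - drop_event:
      when:
        not:
          or:
            - equals:
                kubernetes.container.name: "C3"
            - equals:
                kubernetes.container.name: "C4"

output.elasticsearch:
  hosts: ["http://es2:9200"]
```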
I hope this helps you think about a solution for your case.