Assuming you have a reasonably modern version of Logstash, I would run one pipeline that does no filtering (or only the filtering needed by both outputs) and then unconditionally writes to two outputs, each of which is consumed by a downstream pipeline that performs the output-specific filtering.
You might want to look at pipeline-to-pipeline communication for this, or you can use the tcp or http plugins to communicate between pipelines.
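As a rough sketch of the forked path pattern using pipeline-to-pipeline communication, a `pipelines.yml` might look like the following. The pipeline IDs, the beats input, and the filters/outputs are all illustrative placeholders, not a definitive setup:

```yaml
# pipelines.yml — IDs, ports, and filters are hypothetical examples
- pipeline.id: intake
  config.string: |
    input { beats { port => 5044 } }
    # no output-specific filtering here; fan out to both downstream pipelines
    output {
      pipeline { send_to => ["es-path", "archive-path"] }
    }
- pipeline.id: es-path
  config.string: |
    input { pipeline { address => "es-path" } }
    # filtering specific to the Elasticsearch output
    filter { mutate { remove_field => ["@version"] } }
    output { elasticsearch { hosts => ["localhost:9200"] } }
- pipeline.id: archive-path
  config.string: |
    input { pipeline { address => "archive-path" } }
    # filtering specific to the archive output would go here
    output { file { path => "/tmp/archive.log" } }
```

Each downstream pipeline has its own queue, so a slow output on one path does not block the other.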
Interesting... I checked out the link, and it appears this is the newer (better) way to handle the forked path pattern. They even note it used to be done with if conditionals and the clone filter, which is where I was headed.
Thanks much... I will definitely be checking this out; it seems like the way to go.