What are you trying to accomplish? If you want to clone events and send them to the same server, just send them to a different index in the elasticsearch output.
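For example, since every Logstash output receives every event, cloning to a second index on the same cluster only needs a second `elasticsearch` block. A minimal sketch (the host, index names, and date pattern are placeholders, not from your config):

```
output {
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    index => "logs-primary-%{+YYYY.MM.dd}"
  }
  # Each event is also written here, unchanged.
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    index => "logs-copy-%{+YYYY.MM.dd}"
  }
}
```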
What is not clear from the documentation, though, is whether all filtering now needs to take place within the pipelines defined in config/pipelines.yml.
Do we need to move the filtering logic out of the .conf file?
As far as I know, the DLQ is only supported by the Elasticsearch output plugin, and it only queues documents for which Elasticsearch reported an error; it does not capture events sent while Elasticsearch was unavailable.
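For reference, the DLQ has to be enabled explicitly, and reprocessing its entries is usually done with the `dead_letter_queue` input plugin in a separate pipeline. A rough sketch, assuming a default data directory and a pipeline id of "main" (adjust both for your install):

```
# logstash.yml — the DLQ is disabled by default
dead_letter_queue.enable: true

# A separate pipeline that re-reads the failed documents
input {
  dead_letter_queue {
    path        => "/var/lib/logstash/dead_letter_queue"
    pipeline_id => "main"
  }
}
```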
Given that I had a spare Logstash server doing nothing, I ended up duplicating everything:
Two Filebeat instances residing on the host that produces the logs.
Each Filebeat instance ships logs to a dedicated Logstash node.
Each Logstash node pushes documents to a dedicated Elasticsearch cluster.
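The Filebeat side of that setup is essentially two copies of the same config, differing only in the Logstash host. A sketch of one instance (paths and hostnames are placeholders):

```
# filebeat.yml for the first instance; the second is identical
# except that output.logstash.hosts points at the other node.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log

output.logstash:
  hosts: ["logstash-a.example.com:5044"]
```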
Not the most elegant approach, but it was easy to configure without spending much time converting an existing (and huge) Logstash configuration into pipelines.
Regression-testing that conversion would have taken weeks for no real benefit.