I would ingest the entire file as a single event and then process it two different ways. With any recent release of logstash you would do that using multiple pipelines and a forked-path pattern. However, I will use the old-school method.
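For reference, a forked-path setup would be wired up in pipelines.yml roughly like this. The pipeline ids, file path, and index names here are just illustrative placeholders:

- pipeline.id: intake
  config.string: |
    input { file { path => "/tmp/sections.log" } }
    output { pipeline { send_to => ["section1", "section2"] } }
- pipeline.id: section1
  config.string: |
    input { pipeline { address => "section1" } }
    output { elasticsearch { index => "section1" } }
- pipeline.id: section2
  config.string: |
    input { pipeline { address => "section2" } }
    output { elasticsearch { index => "section2" } }

Each downstream pipeline would then apply its own filters before its output.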
The clone filter creates one or more clones of an event. In this example it creates one clone and sets the type to "section2".
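The filter in question is presumably just:

clone { clones => [ "section2" ] }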
If you used
clone { clones => [ "section1", "section2" ] }
you would get 3 events: the clone filter would set [type] to "section1" on one clone and "section2" on the other, and it would not set [type] on the third one (the original event).
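Putting it together, a minimal sketch of routing each labeled event through its own filters and output might look like this (the field and index names are just placeholders):

filter {
    clone { clones => [ "section2" ] }
    if [type] == "section2" {
        # filters that should only run against the clone
        mutate { add_field => { "section" => "2" } }
    } else {
        # filters for the original event
        mutate { add_field => { "section" => "1" } }
    }
}
output {
    if [type] == "section2" {
        elasticsearch { hosts => ["localhost:9200"] index => "section2" }
    } else {
        elasticsearch { hosts => ["localhost:9200"] index => "section1" }
    }
}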
Thank you so much Badger. So you apply filters based on the clone labels and route each labeled event to the corresponding output index.
This line isn't mapping the date into @timestamp:

grok { match => { "message" => "^DATE: %{DATA:[@metadata][date]}" } }
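Note that grok only captures the text into [@metadata][date]; it never sets @timestamp by itself. For that you need a date filter after the grok, something like the following, assuming the date looks like 2023-01-15 (adjust the pattern to your actual format):

grok { match => { "message" => "^DATE: %{DATA:[@metadata][date]}" } }
date { match => [ "[@metadata][date]", "yyyy-MM-dd" ] }

The date filter writes to @timestamp by default, so no target option is needed.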