These are not duplicates. There are different places in the processing pipeline where fields can be added. The first option is per prospector (one can configure multiple prospectors, potentially with different fields). The second option is used by the publisher pipeline (I think) and should add those fields to all events.
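To make the distinction concrete, here is a minimal sketch, assuming a Filebeat version that still uses the `filebeat.prospectors` syntax; the paths, tag names, and field values are placeholders:

```yaml
filebeat.prospectors:
  # Per-prospector tags/fields: applied only to events read by this prospector.
  - input_type: log
    paths:
      - /var/log/app/*.log        # hypothetical path
    tags: ["app-logs"]
    fields:
      env: staging

# Global tags/fields: added by the publisher pipeline to every event,
# regardless of which prospector produced it.
tags: ["beta"]
fields:
  team: platform
```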
Thanks. Based on your comments above, if I added tags using the second set, I should see those tags in the output system (Kafka in this case). However, I see the reverse: I have to update the tags and fields in the first set to see the additions in the output.
However, maybe I am looking for the tags in the wrong place? Maybe the second set doesn't affect the actual output data but something else?
Could you provide an example of a configuration where tagging is not working as expected, and an output event if possible?
The per-prospector tags should be appended to the global tags and added to each event.
The per-prospector fields should be merged with the global fields and added to each event. The per-prospector fields take precedence over the global ones if there are conflicts.
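For illustration, a sketch of how the two levels might combine; the tag names, paths, and field values here are hypothetical:

```yaml
# Global settings (publisher pipeline) – applied to all events.
tags: ["global-tag"]
fields:
  env: prod
  team: ops

filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app/*.log      # hypothetical path
    tags: ["prospector-tag"]    # appended to the global tags
    fields:
      env: staging              # overrides the global "env" on conflict

# Expected result on each event from this prospector (illustrative):
#   tags:   ["global-tag", "prospector-tag"]
#   fields: { env: "staging", team: "ops" }
```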