I am trying to use an aggregate filter to pull fields from several different but related log lines into one "summary" event. There is a clear end_of_task event, and I would like to "push"/add all of the fields in the map into that final end_of_task event.
Do I have to use an event.set call for each of the possible fields in the map, e.g. event.set('field1', map['field1'])?
I would prefer NOT to go this route, as the number and names of the fields in the map can vary.
OR
Can I iterate through all the fields in the map in some way?
OR
Can I just push the whole map into the event as an array with event.set?
Something like this: event.set('session_summary', map)? (A rough sketch of the whole pipeline I have in mind is below.)
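To make the question concrete, here is a minimal sketch of what I'm attempting. The field names (task_id, field1, session_summary) and the [type] == "end_of_task" test are placeholders for my real ones:

```
filter {
  aggregate {
    task_id => "%{task_id}"
    code => "
      # accumulate whatever fields this log line carries into the map
      map['field1'] ||= event.get('field1')
    "
  }

  if [type] == "end_of_task" {
    aggregate {
      task_id => "%{task_id}"
      end_of_task => true
      code => "
        # option 2: iterate over every field currently in the map
        map.each { |field, value| event.set(field, value) }

        # option 3: attach the whole map as one nested field instead
        # event.set('session_summary', map)
      "
    }
  }
}
```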
Your comment about push_previous_map_as_event raised another question, as this is my first time using the aggregate filter.
Given the nature of my logs, it is very likely that I will have multiple maps being handled at the same time. My understanding of push_previous_map_as_event is that it would push the previous map (and all of its associated data/fields) out of memory as soon as it saw a log with a different task_id.
Is this correct?
And/or am I wrong in my understanding that this filter can handle multiple simultaneous maps?
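In other words, with a configuration like the sketch below (same placeholder names, arbitrary timeout), I'm worried that if logs arrive interleaved, e.g. task_id A, A, B, A, the first B line would push A's still-incomplete map out as an event:

```
filter {
  aggregate {
    task_id => "%{task_id}"
    code => "
      # accumulate fields exactly as in the sketch above
      map['field1'] ||= event.get('field1')
    "
    # would a new task_id here flush the previous map immediately,
    # even though the previous task is still producing log lines?
    push_previous_map_as_event => true
    timeout => 120
  }
}
```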