Currently, whenever a log line matches one of the grok patterns above, Logstash creates the corresponding fields and sends them to Elasticsearch as a separate event. What we want instead is that when the last grok match happens, it should pull the fields from the previous events and emit everything as a single event, using source as the common field. How can this be achieved?
The above config results in Kibana output like below, where we don't have any value for fields like build_DefinitionName, build_TeamProject, etc. in the final event.
The %{build_DefinitionName} sprintf syntax does not work inside the aggregate code option. You should use event.get('build_DefinitionName') instead.
The map object is not auto-filled. You must fill it yourself, each time you have a "first" event, with map['field'] = event.get('field').
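For example (field names taken from your screenshot; this is only a sketch of the code option, not your full filter), the map would be filled like this on the "first" event:

```
# Inside the aggregate filter's "code" option, on the "first" event:
map['build_DefinitionName'] = event.get('build_DefinitionName')   # correct: Ruby event API
map['build_TeamProject']    = event.get('build_TeamProject')
# map['build_DefinitionName'] = "%{build_DefinitionName}"         # wrong: sprintf syntax is not evaluated here
```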
push_previous_map_as_event is especially useful when you have no "tagged" end event and events are not interlaced. It was designed primarily for the jdbc input case. In your case, it seems that you do have a tagged end event. And since you mention logs and metrics, you will probably receive interlaced events, i.e. first an event with source "source1", then an event with source "source2", then again an event with "source1". push_previous_map_as_event requires events to be sorted per task_id.
end_of_task => true is useful to destroy the map object (associated with the task_id) when you don't need it anymore. It is important so that maps are not kept in memory forever.
Finally, if you want the "first" event fields to be merged into the "end" event, you must not use the push* aggregate options, which always create a fresh new event from the map object.
That said, so that the "first" event fields are merged into the "end" event, I suggest a Logstash aggregate configuration along these lines:
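A minimal sketch of what that could look like, assuming the "first" event carries build_DefinitionName / build_TeamProject, that source is the common field used as task_id, and that the "end" event can be recognized somehow (the conditions and the timeout value below are assumptions you will need to adapt to your grok patterns):

```
filter {
  # "First" event: store the fields in the map, keyed by the common "source" field.
  # The condition used to detect the first event is an assumption; adapt it to your groks.
  if [build_DefinitionName] {
    aggregate {
      task_id => "%{source}"
      code => "
        map['build_DefinitionName'] = event.get('build_DefinitionName')
        map['build_TeamProject']    = event.get('build_TeamProject')
      "
    }
  }

  # "End" event: copy the stored fields into this event, then destroy the map.
  # The [end_event] condition is an assumption; replace it with whatever identifies your last grok match.
  if [end_event] {
    aggregate {
      task_id => "%{source}"
      code => "
        event.set('build_DefinitionName', map['build_DefinitionName'])
        event.set('build_TeamProject',    map['build_TeamProject'])
      "
      end_of_task => true
      timeout => 120   # seconds to keep the map if the end event never arrives (value is an assumption)
    }
  }
}
```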