You can get it to work by using `--pipeline.batch.size 1`.
By default, Logstash processes events in batches of 125: 125 events are parsed by grok, then 125 events pass through the aggregate filter that creates the map if it does not exist, then 125 events update the map, and then 125 events pass through the aggregate filter that ends the aggregation when it sees a TASK_END. As a result, the `sql_duration` value of 118 is added to the first TASK_END, not the second.
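As a minimal sketch, the batch size can be set either on the command line or in `logstash.yml` (the config file name `pipeline.conf` below is just a placeholder for your own pipeline file):

```shell
# Run Logstash with a batch size of 1 so each event flows through
# the whole filter chain before the next one is read:
bin/logstash -f pipeline.conf --pipeline.batch.size 1

# Or set the equivalent option persistently in config/logstash.yml:
#   pipeline.batch.size: 1
```

Note that a batch size of 1 reduces throughput, so it is best reserved for pipelines where the aggregate filter's event ordering actually matters.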