I am using Logstash to read events from Elasticsearch and the aggregate filter to condense them.
Because the data I am condensing has no clear start/stop events, we rely on the `timeout` and `inactivity_timeout` settings of the aggregate filter and push the map as a new event on timeout.
However, we are not getting all of the aggregations. It seems like Logstash exits as soon as the Elasticsearch input has sent all of its events. I suspect this is because we call `event.cancel()` at the end of the aggregation code, since we do not want the original events. If I use a one-hour timeout for the hour's worth of data I am reading from Elasticsearch, I never get any results. If I reduce the timeout, I get partial results, but I do not think I am getting all of them.
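For reference, our pipeline looks roughly like this (the index, query, task ID field, and map contents are placeholders, not our exact config):

```
input {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "events-*"
    # placeholder query: one hour of data
    query => '{ "query": { "range": { "@timestamp": { "gte": "now-1h" } } } }'
  }
}

filter {
  aggregate {
    task_id => "%{session_id}"        # placeholder correlation field
    code => "
      map['count'] ||= 0
      map['count'] += 1
      event.cancel()                  # drop the original event
    "
    push_map_as_event_on_timeout => true
    timeout_task_id_field => "session_id"
    timeout => 3600                   # never fires before Logstash exits
    inactivity_timeout => 300
    timeout_tags => ["aggregated"]
  }
}
```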
Is Logstash exiting because no more events exist, without giving the aggregate filter a chance to time out and push the pending maps as events? Is there anything I can do about this? Can I tell the aggregate filter to flush all pending maps when the input is complete, or can I delay shutdown until the timeout expires?