Aggregate filter timeouts and input completion

I am using Logstash to read events from Elasticsearch and the aggregate filter to condense them.

Because the data I am condensing has no clear start/stop events, we are using the timeout and inactivity_timeout settings of the aggregate filter and pushing the map as an event.

However, we are not getting all of the aggregations. It seems like Logstash is exiting as soon as the Elasticsearch input has sent all of the events. I think this is because we use event.cancel() at the end of the aggregate filter's code block, since we do not want the original events. If I use a one-hour timeout for the hour's worth of data I'm reading from Elasticsearch, I never get any results. If I reduce the timeout, I get partial results, but I don't think I am getting all of them.
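For reference, this is roughly the shape of the config I'm describing (a simplified sketch; the task_id field, the map contents, and the timeout values here are placeholders, not my actual pipeline):

```
filter {
  aggregate {
    task_id => "%{user_id}"          # hypothetical correlation field
    code => "
      map['event_count'] ||= 0
      map['event_count'] += 1
      event.cancel()                 # drop the original event
    "
    push_map_as_event_on_timeout => true
    timeout => 3600                  # one hour, matching the data window
    inactivity_timeout => 300
    timeout_task_id_field => "user_id"
  }
}
```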

Is logstash exiting because no more events exist and not allowing the aggregate filter to timeout and push new events? Is there anything I can do about this? Can I tell the aggregate filter to push all the events when the input is complete? Or can I delay the exit until the timeout value?

Thanks.

The filter will flush any events still in the map when logstash shuts down.

I want to be sure we are talking about the same thing.

I believe the filter has processed all the events from the input. Now we have only maps that have not had a timeout or end event. Are you saying that upon logstash exit, the aggregate filter is supposed to push all the maps as new events, just as if it had reached the timeout value?

If I am understanding you, then I do not believe this is happening. When I have a high timeout value, and am only using the stdout output plugin, I get NO output. I see no maps that were pushed as events. I only see output at all when I lower the timeout, and I do not believe it has pushed all the maps even then.

My bad. The final flush (when Logstash exits) is called but does not do anything when push_map_as_event_on_timeout is used. It only flushes events when push_previous_map_as_event is used.

Periodic flushes are expected to work for both.
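To make the distinction concrete, the variant that does get flushed on shutdown looks something like this (a sketch with placeholder field names; push_previous_map_as_event pushes each map when an event with a new task_id arrives, so it assumes the input is sorted by task_id):

```
filter {
  aggregate {
    task_id => "%{user_id}"              # hypothetical correlation field
    code => "
      map['event_count'] ||= 0
      map['event_count'] += 1
      event.cancel()                     # drop the original event
    "
    push_previous_map_as_event => true   # flushed on shutdown, unlike the timeout variant
    timeout => 3600
  }
}
```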