I want to use the aggregate filter in Logstash to pull data from an Oracle table that contains time-series data. I will be querying by row id, and the data in the table is already aggregated in one-minute chunks; I want to re-aggregate it into 10-minute chunks.
Looking at the aggregation examples, I think example 4 most closely matches what I want to do: I don't really have an "end" event, just a time window. So what would I use for the "task_id"? Would that be a timestamp in this case? I'm assuming from the documentation that "timeout => 600" means a 10-minute interval, as in, the filter collects data from 10:00-10:09 before moving on to building the next set of maps (10:10-10:19)?
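For context, this is the rough shape of the filter I'm imagining, not working config. The "bucket" field, the "value" column, and the idea of truncating the timestamp to a 10-minute boundary to use as the task_id are all my assumptions about how this might be done:

```
filter {
  ruby {
    # Guess: derive a bucket key by truncating the event timestamp
    # to a 10-minute boundary (600 seconds), assuming the Oracle
    # timestamp column has already been parsed into @timestamp
    code => "
      t = event.get('@timestamp').to_i
      event.set('bucket', t - (t % 600))
    "
  }
  aggregate {
    task_id => "%{bucket}"
    code => "
      map['sum']   ||= 0
      map['sum']    += event.get('value').to_f
      map['count'] ||= 0
      map['count']  += 1
    "
    push_map_as_event_on_timeout => true
    timeout => 600
    timeout_task_id_field => "bucket"
  }
}
```

Would something along these lines work, or does "timeout" not line up with wall-clock 10-minute windows the way I'm assuming?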