I was wondering whether there are any good practices around rolling up Heartbeat data in Elasticsearch, or if anyone here has experience with it.
We're collecting uptime data from multiple systems, and with monitors polling every 15 seconds the data volume grows quickly. Our need for granularity/resolution decreases over time: for the past 7 days, 15-second resolution is useful, but for data 1 month back, buckets of 5-minute averages would be enough, and for data even further back (e.g. 3 months and older), 60-minute averages would suffice.
The ideas so far:
1. The Heartbeat index contains raw data for the past 2 weeks
2. A roll-up job aggregates that data into 5-minute buckets
3. ILM takes care of deleting raw data older than 2 weeks
4. Another roll-up job aggregates the roll-up index from point 2 into another index with 60-minute buckets
5. ILM takes care of deleting the rolled-up data from point 2
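For point 2, I had something like the following in mind (a sketch only; the job name, cron schedule, and rollup index name are placeholders, and the grouped/metric fields are my assumptions based on Heartbeat's exported fields):

```json
PUT _rollup/job/heartbeat_5m
{
  "index_pattern": "heartbeat-*",
  "rollup_index": "heartbeat-rollup-5m",
  "cron": "0 */15 * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "@timestamp",
      "fixed_interval": "5m"
    },
    "terms": {
      "fields": ["monitor.id"]
    }
  },
  "metrics": [
    {
      "field": "monitor.duration.us",
      "metrics": ["avg", "min", "max"]
    },
    {
      "field": "summary.up",
      "metrics": ["avg", "max"]
    },
    {
      "field": "summary.down",
      "metrics": ["avg", "max"]
    }
  ]
}
```

Point 4 would then be a second job of the same shape with `"index_pattern": "heartbeat-rollup-5m"` and a 60-minute `fixed_interval`, assuming a roll-up job can read from a roll-up index at all (hence my question below).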
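For point 3, the deletion side seems straightforward with a plain ILM policy; a minimal sketch (policy name is a placeholder, and `min_age` is counted from rollover, so the raw data would actually live a bit longer than 14 days):

```json
PUT _ilm/policy/heartbeat-raw
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "14d",
        "actions": { "delete": {} }
      }
    }
  }
}
```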
- Are steps 4 and 5 even possible, i.e. can a roll-up job run against a roll-up index?
- Does the Uptime app in Kibana support roll-up indices?
- How do you maintain your Heartbeat data? I'm struggling to come up with a reasonable setup...
P.S. The entire Elastic environment is running on version 7.6.