I am currently testing the new rollup APIs in Elasticsearch 6.3 and am wondering whether there is any way to configure a rollup job to dynamically create an index based on the timestamp, the way Logstash does when ingesting data. The use case is rolling up large amounts of time-series network performance reporting data. I'm worried that even an hourly rollup will produce a huge index to manage, so I'm looking to split it into one index per day of hourly rollups.
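For context, "like Logstash does" refers to the `%{+YYYY.MM.dd}` sprintf date format in the index name, which Logstash expands from the event's `@timestamp` in UTC so that each day's events land in their own index. A minimal sketch of that expansion (the prefix and timestamp here are illustrative, not from a real event):

```python
from datetime import datetime, timezone

def logstash_daily_index(prefix="dxs-hourly-", ts=None):
    # Logstash expands %{+YYYY.MM.dd} from the event @timestamp (UTC),
    # producing one index per day, e.g. dxs-hourly-2018.07.15.
    ts = ts or datetime.now(timezone.utc)
    return f"{prefix}{ts.strftime('%Y.%m.%d')}"
```

This is the per-day splitting behaviour I'd like the rollup job's `rollup_index` to replicate.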
Current rollup job config:
{
  "index_pattern": "dxs-raw-*",
  "rollup_index": "dxs-hourly-%{+YYYY.MM.dd}",
  "cron": "* */15 * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "@timestamp",
      "interval": "1h",
      "delay": "12h"
    },
    "terms": {
      "fields": ["ci_id.keyword", "client_id.keyword", "element_name.keyword", "measurement.keyword", "source_management_platform.keyword", "unit.keyword"]
    }
  },
  "metrics": [
    {
      "field": "value",
      "metrics": ["min", "max", "avg"]
    }
  ]
}
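For reference, the job was submitted in the Kibana Dev Tools console roughly as follows (the job id `dxs-hourly` matches the error below; `_xpack/rollup/job` is the 6.x endpoint, and the body is the config above, abbreviated here):

```
PUT _xpack/rollup/job/dxs-hourly
{
  "index_pattern": "dxs-raw-*",
  "rollup_index": "dxs-hourly-%{+YYYY.MM.dd}",
  ...
}
```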
Error seen when PUT-ting the job via the Kibana Dev Tools console:
{
  "error": {
    "root_cause": [
      {
        "type": "invalid_index_name_exception",
        "reason": "Invalid index name [dxs-hourly-%{+YYYY.MM.dd}], must be lowercase",
        "index_uuid": "_na_",
        "index": "dxs-hourly-%{+YYYY.MM.dd}"
      }
    ],
    "type": "runtime_exception",
    "reason": "runtime_exception: Could not create index for rollup job [dxs-hourly]",
    "caused_by": {
      "type": "invalid_index_name_exception",
      "reason": "Invalid index name [dxs-hourly-%{+YYYY.MM.dd}], must be lowercase",
      "index_uuid": "_na_",
      "index": "dxs-hourly-%{+YYYY.MM.dd}"
    }
  },
  "status": 500
}
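As I understand the error, `rollup_index` is treated as a literal index name rather than a pattern, so the string fails Elasticsearch's index-name validation on the lowercase check (the uppercase `YYYY`/`MM` in the sprintf pattern). A simplified sketch of the relevant checks, not the actual Elasticsearch source, and with only a partial forbidden-character list:

```python
def is_valid_index_name(name):
    # Simplified approximation of Elasticsearch index-name validation:
    # must be all lowercase, must not contain certain special characters,
    # and must not start with -, _ or +. (Real validation has more rules.)
    forbidden = set(' "*\\<>|,/?#')
    return (name == name.lower()
            and not any(c in forbidden for c in name)
            and not name.startswith(('-', '_', '+')))
```

So `dxs-hourly-%{+YYYY.MM.dd}` is rejected as a name, while an already-expanded name like `dxs-hourly-2018.07.15` would be fine; the question is whether the rollup job can do that expansion itself.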