Invalid interval specified, must be non-null and non-empty

Hi there,

I have been trying to run the following query on a rollup index, but I can't get around the interval error.

This is my query:

GET /rollup-24hours/_search
{
  "size": 0, 
  "aggs": {
    "by_period": {
      "aggs": {
        "sum_value": {
          "sum": {
            "field": "value"
          }
        }
      },
      "date_histogram": {
        "field": "timestamp",
        "interval": "hour",
        "order": {
          "_key": "desc"
        }
      }
    }
  }
}

And the error:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "Invalid interval specified, must be non-null and non-empty"
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "rollup-24hours",
        "node" : "Q7Xq38SpSlaQssaeg7zUNT39w",
        "reason" : {
          "type" : "illegal_argument_exception",
          "reason" : "Invalid interval specified, must be non-null and non-empty"
        }
      }
    ],
    "caused_by" : {
      "type" : "illegal_argument_exception",
      "reason" : "Invalid interval specified, must be non-null and non-empty",
      "caused_by" : {
        "type" : "illegal_argument_exception",
        "reason" : "Invalid interval specified, must be non-null and non-empty"
      }
    }
  },
  "status" : 400
}

The rollup looks like this:

{
    "_id": "rollup-24hours",
    "_seqNo": 22584,
    "_primaryTerm": 2,
    "rollup": {
        "rollup_id": "rollup-24hours",
        "enabled": true,
        "schedule": {
            "interval": {
                "start_time": 1631259744458,
                "period": 1,
                "unit": "Minutes"
            }
        },
        "last_updated_time": 1631259744458,
        "enabled_time": 1631259744458,
        "description": "Roll up for 24 hours",
        "schema_version": 9,
        "source_index": "documents-*",
        "target_index": "rollup-24hours",
        "metadata_id": "JdCpenzBJrVouQoQQ--r",
        "roles": [],
        "page_size": 1000,
        "delay": 0,
        "continuous": true,
        "dimensions": [
            {
                "date_histogram": {
                    "fixed_interval": "24h",
                    "source_field": "timestamp",
                    "target_field": "timestamp",
                    "timezone": "UTC"
                }
            },
            {
                "terms": {
                    "source_field": "company.keyword",
                    "target_field": "company.keyword"
                }
            },
            {
                "terms": {
                    "source_field": "user.keyword",
                    "target_field": "user.keyword"
                }
            },
            {
                "terms": {
                    "source_field": "key.keyword",
                    "target_field": "key.keyword"
                }
            },
            {
                "terms": {
                    "source_field": "product.keyword",
                    "target_field": "product.keyword"
                }
            },
            {
                "terms": {
                    "source_field": "value",
                    "target_field": "value"
                }
            }
        ],
        "metrics": [
            {
                "source_field": "value",
                "metrics": [
                    {
                        "sum": {}
                    }
                ]
            }
        ]
    },
    "metadata": {
        "rollup-24hours": {
            "metadata_id": "JdCpznsBJrVouQoQQ--r",
            "rollup_metadata": {
                "rollup_id": "rollup-24hours",
                "last_updated_time": 1631259807589,
                "continuous": {
                    "next_window_start_time": 1631232000000,
                    "next_window_end_time": 1631318400000
                },
                "status": "started",
                "failure_reason": null,
                "stats": {
                    "pages_processed": 44,
                    "documents_processed": 871,
                    "rollups_indexed": 293,
                    "index_time_in_millis": 497,
                    "search_time_in_millis": 1404
                }
            }
        }
    }
}

Any ideas what's wrong here?

Elasticsearch version 7.10.2

Some additional details.

Here is the stack trace from the logs:

[Invalid interval specified, must be non-null and non-empty]; nested: IllegalArgumentException[Invalid interval specified, must be non-null and non-empty];
	at org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:651)
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:322)
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:603)
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:400)
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.access$100(AbstractSearchAsyncAction.java:70)
	at org.elasticsearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:258)
	at org.elasticsearch.action.search.SearchExecutionStatsCollector.onFailure(SearchExecutionStatsCollector.java:73)
	at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59)
	at org.elasticsearch.action.search.SearchTransportService$ConnectionCountingHandler.handleException(SearchTransportService.java:408)
	at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:640)
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1181)
	at org.elasticsearch.transport.InboundHandler.lambda$handleException$3(InboundHandler.java:277)
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:253)
	at org.elasticsearch.transport.InboundHandler.handleException(InboundHandler.java:275)
	at org.elasticsearch.transport.InboundHandler.handlerResponseError(InboundHandler.java:267)
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:131)
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:89)
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:700)
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142)
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117)
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82)
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1518)
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1267)
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314)
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at __PATH__(Thread.java:834)
Caused by: java.lang.IllegalArgumentException: Invalid interval specified, must be non-null and non-empty
	at org.elasticsearch.search.aggregations.bucket.histogram.DateIntervalWrapper.createRounding(DateIntervalWrapper.java:279)
	at org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder.innerBuild(DateHistogramAggregationBuilder.java:445)
	at org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder.doBuild(ValuesSourceAggregationBuilder.java:369)
	at org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder.doBuild(ValuesSourceAggregationBuilder.java:43)
	at org.elasticsearch.search.aggregations.AbstractAggregationBuilder.build(AbstractAggregationBuilder.java:139)
	at org.elasticsearch.search.aggregations.AggregatorFactories$Builder.build(AggregatorFactories.java:347)
	at org.elasticsearch.search.SearchService.parseSource(SearchService.java:935)
	at org.elasticsearch.search.SearchService.createContext(SearchService.java:726)
	at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:428)
	at org.elasticsearch.search.SearchService.access$500(SearchService.java:141)
	at org.elasticsearch.search.SearchService$2.lambda$onResponse$0(SearchService.java:401)
	at org.elasticsearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:58)
	at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:752)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at __PATH__(ThreadPoolExecutor.java:1128)
	at __PATH__(ThreadPoolExecutor.java:628)
	... 1 more

I found that the interval field was deprecated in version 7.2, and the valid options are now calendar_interval and fixed_interval. Swapping it for either of these two fixes the query.
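For reference, here is the query from above rewritten with calendar_interval ("hour" is a calendar unit; with fixed_interval you would write a fixed value like "1h" instead):

```json
GET /rollup-24hours/_search
{
  "size": 0,
  "aggs": {
    "by_period": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "hour",
        "order": {
          "_key": "desc"
        }
      },
      "aggs": {
        "sum_value": {
          "sum": {
            "field": "value"
          }
        }
      }
    }
  }
}
```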

From the docs:

Combined interval field is deprecated

[7.2] Deprecated in 7.2. The interval field is deprecated. Historically, both calendar and fixed intervals were configured in a single interval field, which led to confusing semantics. Specifying 1d would be assumed as a calendar-aware time, whereas 2d would be interpreted as fixed time. To get "one day" of fixed time, the user would need to specify the next smaller unit (in this case, 24h).

This combined behavior was often unknown to users, and even when knowledgeable about the behavior it was difficult to use and confusing.

This behavior has been deprecated in favor of two new, explicit fields: calendar_interval and fixed_interval .

By forcing a choice between calendar and intervals up front, the semantics of the interval are clear to the user immediately and there is no ambiguity. The old interval field will be removed in the future.
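To illustrate the distinction the docs describe, here is a minimal sketch of the two explicit forms (field names here are just illustrative wrapper keys, not real API parameters): calendar_interval is calendar-aware, so 1M spans 28 to 31 days depending on the month, while fixed_interval is a constant length, so 30d is always exactly 30 * 24 hours:

```json
{
  "calendar_aware": {
    "date_histogram": { "field": "timestamp", "calendar_interval": "1M" }
  },
  "fixed_length": {
    "date_histogram": { "field": "timestamp", "fixed_interval": "30d" }
  }
}
```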

Docs page:

What I still can't explain is why this throws an error, since in theory the field is only deprecated, not removed (yet). Maybe it's something specific to rollup indices?