DateHistogramAggregation with Composite sub-aggregation

Hi,

I got the following exception when trying to execute a DateHistogramAggregation with a sub-aggregation of type CompositeAggregation. Is there a reason why this isn't supported? Also, would this work with a regular HistogramAggregation? I am using Elasticsearch version 7.7.0. I was also surprised not to get an exception during the client-side validation phase, before the query was actually executed.
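For reference, the aggregation I am describing has roughly this shape (field names are placeholders, not my actual mapping):

            "aggs": {
                "by_hour": {
                    "date_histogram": { "field": "timestamp", "fixed_interval": "1h" },
                    "aggs": {
                        "by_app": {
                            "composite": {
                                "sources": [
                                    { "application": { "terms": { "field": "app" } } }
                                ]
                            }
                        }
                    }
                }
            }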

Thanks,
Philippe

ElasticsearchException[Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=[composite] aggregation cannot be used with a parent aggregation of type: [DateHistogramAggregatorFactory]]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=[composite] aggregation cannot be used with a parent aggregation of type: [DateHistogramAggregatorFactory]]];
	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496)
	at org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:603)
	at org.elasticsearch.action.search.MultiSearchResponse.itemFromXContent(MultiSearchResponse.java:215)
	at org.elasticsearch.action.search.MultiSearchResponse.lambda$static$1(MultiSearchResponse.java:56)
	at org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareObjectArray$7(AbstractObjectParser.java:183)
	at org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareFieldArray$13(AbstractObjectParser.java:211)
	at org.elasticsearch.common.xcontent.AbstractObjectParser.parseArray(AbstractObjectParser.java:229)
	at org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareFieldArray$14(AbstractObjectParser.java:211)
	at org.elasticsearch.common.xcontent.ObjectParser.lambda$declareField$4(ObjectParser.java:283)
	at org.elasticsearch.common.xcontent.ObjectParser.parseValue(ObjectParser.java:384)
	at org.elasticsearch.common.xcontent.ObjectParser.parseArray(ObjectParser.java:378)
	at org.elasticsearch.common.xcontent.ObjectParser.parseSub(ObjectParser.java:410)
	at org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:238)
	at org.elasticsearch.common.xcontent.ConstructingObjectParser.parse(ConstructingObjectParser.java:169)
	at org.elasticsearch.common.xcontent.ConstructingObjectParser.apply(ConstructingObjectParser.java:161)
	at org.elasticsearch.action.search.MultiSearchResponse.fromXContext(MultiSearchResponse.java:194)
	at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1793)
	at org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAsyncAndParseEntity$10(RestHighLevelClient.java:1581)
	at org.elasticsearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1663)
	at org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:590)
	at org.elasticsearch.client.RestClient$1.completed(RestClient.java:333)
	at org.elasticsearch.client.RestClient$1.completed(RestClient.java:327)
	at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
	at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
	at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
	at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
	at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
	at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
	at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
	at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
	at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
	at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
	at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
	at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
	at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
	at java.base/java.lang.Thread.run(Thread.java:834)

Hi,

can you describe your use case and, if possible, provide a data example? There is probably an alternative way to solve the problem. The purpose of a composite aggregation is to page through a larger dataset, so I wonder about using a composite aggregation as a sub-aggregation.

As for validation: this is by design. The client code only does simple validations; most validation is done server side.

Hendrik,

Thanks for your response. My use case is to compute hourly metrics based on application state. Each hour I want to know how many instances of a given application were executed, broken down by state. For instance:

Application A, Version 1.0, State: Successful, 10 instances
Application A, Version 1.0, State: Faulted, 2 instances
Application B, Version 2.0, State: Successful, 3 instances
Application C, Version 1.0, State: Aborted, 2 instances

I am guessing the alternative to using a composite aggregation as a sub-aggregation of the top-level date histogram aggregation would be to use several levels of nested terms sub-aggregations.
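Something like this, I suppose (field names made up for illustration):

            "aggs": {
                "by_hour": {
                    "date_histogram": { "field": "timestamp", "fixed_interval": "1h" },
                    "aggs": {
                        "by_app": {
                            "terms": { "field": "app" },
                            "aggs": {
                                "by_version": {
                                    "terms": { "field": "version" },
                                    "aggs": {
                                        "by_state": { "terms": { "field": "state" } }
                                    }
                                }
                            }
                        }
                    }
                }
            }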

Thanks,
Philippe

The most important use case for composite aggregations is pagination: it lets you retrieve all buckets even when there are very many of them, where ordinary aggregations run into bucket limits.

A composite aggregation can have several sources, so you can combine a date_histogram source with e.g. a terms source for the application:

            "composite" : {
                "sources" : [
                    { "date": { "date_histogram" : { "field": "timestamp", "fixed_interval": "1h" } } },
                    { "application": { "terms" : { "field": "app" } } }
                ]
            }

Are you planning to store the results, e.g. to analyze them further? Transform is built on top of composite aggregations and made for use cases like yours. It saves you custom code, is already built for robustness and scale (and there is a nice UI to get you started easily).
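As a rough sketch, a transform for your case could look something like this (index names, field names, and the value_count field are assumptions you would adjust to your mapping):

            PUT _transform/hourly_app_metrics
            {
                "source": { "index": "app-events" },
                "dest": { "index": "hourly-app-metrics" },
                "pivot": {
                    "group_by": {
                        "hour": { "date_histogram": { "field": "timestamp", "fixed_interval": "1h" } },
                        "application": { "terms": { "field": "app" } },
                        "version": { "terms": { "field": "version" } },
                        "state": { "terms": { "field": "state" } }
                    },
                    "aggregations": {
                        "instances": { "value_count": { "field": "app" } }
                    }
                }
            }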

Hendrik,

Thanks again. This makes sense. I didn't know I could use a date histogram as one of the sources for a composite aggregation. Also thanks for pointing out the Transform functionality.

Philippe

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.