Search phase execution exception with reason all shards failed

Hi!

I am trying to add a sub-bucket in a Kibana histogram and I get the error below:

"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[

{"shard":0,"index":"data-20200123",
"node":"CXFldNupTpObai1jt88wDg","reason":{"type":"too_many_buckets_exception","reason":"Trying to create too many buckets. Must be less than or equal to:
[10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.","max_buckets":10000}},

{"shard":0,"index":"data-20200127",
"node":"CXFldNupTpObai1jt88wDg","reason":{"type":"too_many_buckets_exception","reason":"Trying to create too many buckets.
Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.","max_buckets":10000}},

{"shard":0,"index":"data-a-20200123","node":"CXFldNupTpObai1jt88wDg","reason":{"type":"too_many_buckets_exception","reason":"Trying to create too many buckets.
Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.","max_buckets":10000}},

{"shard":0,"index":"data-20200127","node":"rF8XLfPSSkSUBUdIgr9Lzw","reason":{"type":"too_many_buckets_exception","reason":"Trying to create too many buckets.
Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.","max_buckets":10000}}
]},

"status":503}

"search_phase_execution_exception","reason":"all shards failed"

I guess that during the sub-bucket aggregation I get more than 10000 buckets from Elasticsearch as the answer.
How can I get around this problem?
Could a solution to the problem be to increase the bucket limit, to increase the number of buckets, or to reduce the size of the shards? Is that feasible in Elasticsearch or Kibana?

Thank you in advance!

It can be done, but it's not really recommended: if you don't have enough resources in the ES cluster, raising the limit can cause problems.

> This limit can be set by changing the [search.max_buckets] cluster level setting. "max_buckets": 10000

This is the API to change the setting: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html
You can also set it in the elasticsearch.yml file.
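As a rough sketch, a dynamic update via that API could look like the following (assuming a cluster reachable on localhost:9200 without authentication; 20000 is just an example value, not a recommendation):

```shell
# Raise search.max_buckets cluster-wide. "transient" reverts on a full
# cluster restart; use "persistent" instead to keep it across restarts.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "transient": {
      "search.max_buckets": 20000
    }
  }'
```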

Thank you Marius for your answer.

Since I have only 4 indices with 200 docs in total, and the sub-bucket aggregation has only 30 distinct values, how can it exceed max_buckets (10000)?

Does the number of buckets depend on all the sub-bucket aggregations cumulatively?

It depends on the number of buckets in the timestamp as well. It would be 30 × the number of time buckets. And if you use a small interval across a large time period, it can easily go over 10000.
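To make that arithmetic concrete, here is a small sketch (the numbers are illustrative, not taken from your cluster): with 30 terms per time bucket, the histogram interval and time range alone decide whether you cross the limit.

```python
MAX_BUCKETS = 10_000  # default search.max_buckets

def histogram_buckets(time_range_hours, interval_minutes):
    """Number of date-histogram buckets for a given range and interval."""
    return (time_range_hours * 60) // interval_minutes

terms_per_time_bucket = 30  # distinct values in the terms sub-aggregation

# 24h at a 1-hour interval: 24 * 30 = 720 buckets -> fine
small = histogram_buckets(24, 60) * terms_per_time_bucket
assert small == 720

# 7 days at a 1-minute interval: 10080 * 30 = 302400 buckets -> rejected
wide = histogram_buckets(7 * 24, 1) * terms_per_time_bucket
assert wide == 302_400 and wide > MAX_BUCKETS
```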

Thank you Marius!

Is there any "formula" to calculate the buckets knowing the number of distinguished values of the sub-bucket aggregations (3 sub-buckets (terms) in a kibana histogram), the interval and the time period?
For example:
sub-bucket A: 30 distinct values
sub-bucket B: 5 distinct values
sub-bucket C: 10 distinct values
and all of this requested over 80 time intervals in a time period.
A multiplication of all these? (I guess that this way I will get the maximum possible, but not the real number.)

Or do I have to count all the combinations of these 3 sub-buckets for each moment in the time period and add up all these numbers?
For example:
9:00 : results=8 (1 doc with aa-bb-cc / 4 docs with aa-bb-cb / 3 docs with ab-bb-cc)
9:15 : results=9 (1 doc with aa-bb-cc / 4 docs with aa-bb-cb / 2 docs with ab-bb-cc / 2 docs with ab-bb-ca)
9:30 : results=3 (1 doc with ad-bb-ca / 1 doc with aa-bb-cb / 1 doc with ab-bf-ca)

(Where, e.g., "1 doc with ad-bb-ca" means that the query for sub-bucket A has the value ad as its result,
the query for sub-bucket B has the value bb,
and the query for sub-bucket C has the value ca.)

So, counted this way, the buckets for the time period [9:00-9:30] would be 8+9+3?
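If it helps, the two counting approaches above can be written out side by side (using the illustrative numbers from this thread; which one Elasticsearch actually budgets for is the open question here):

```python
# Worst case: the cartesian product of all sub-bucket cardinalities,
# multiplied by the number of time buckets.
time_buckets = 80
cardinalities = {"A": 30, "B": 5, "C": 10}

worst_case = time_buckets
for n in cardinalities.values():
    worst_case *= n
assert worst_case == 120_000  # far above the 10000 default limit

# Actual combinations observed per time bucket (the 9:00/9:15/9:30 example):
observed = [8, 9, 3]
assert sum(observed) == 20  # the real bucket count for that window
```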
:confused:

Thank you again!!

I really don't know how Elasticsearch calculates it, but I'd guess it's the first one (the maximum possible), as it doesn't know how many combinations there are until it starts calculating them.

Ok :wink:
Thanks!!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.