Data Table - Max Bucket Limit - too_many_buckets_exception

"Trying to create too many buckets. Must be less than or equal to: [65535] but was [65536]. This limit can be set by changing the [search.max_buckets] cluster level setting."

When working with a data table, we run into this error at times. I want to understand how this limit is reached. I know the limit is hit because of the volume of data and the number and size of buckets, but can someone please explain this with a simple example?
I am trying to understand this, ideally as a simple formula, for example:

metric size + (split rows * bucket size) ... = final size

Let's assume I am using a 'Top Hit' metric, with Aggregate set to Concatenate and a Size of 1.
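If it helps, here is a rough, hypothetical sketch of the kind of aggregation I imagine the data table generates behind the scenes (the field names and the split-row size are made up; only the Top Hit size of 1 matches my setup):

```
{
  "size": 0,
  "aggs": {
    "split_rows": {
      "terms": { "field": "host.keyword", "size": 10 },
      "aggs": {
        "top_hit": {
          "top_hits": { "size": 1, "_source": ["message"] }
        }
      }
    }
  }
}
```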
Please let me know if I can explain this better.

Hi,
Could you share the aggregation query causing the exception?

The exception literally means there are too many buckets. When you use multiple sub-aggregations, the total number of buckets can grow exponentially. I also wonder whether you are using a bucket selector aggregation; since it runs AFTER the buckets are created, it does not help avoid the exception.
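As a rough illustration (the field names and sizes here are invented): for nested bucket aggregations with sizes n1, n2, n3, ..., the total is approximately n1 + n1*n2 + n1*n2*n3 + ..., because every level of buckets counts toward the limit, not just the innermost one. So two nested terms aggregations of size 500 each can create up to 500 + 500 * 500 = 250,500 buckets, far above the 65,535 default:

```
{
  "size": 0,
  "aggs": {
    "by_host": {
      "terms": { "field": "host.keyword", "size": 500 },
      "aggs": {
        "by_url": {
          "terms": { "field": "url.keyword", "size": 500 }
        }
      }
    }
  }
}
```

In a Kibana data table, each additional split row adds one more nested level like "by_url" above, so the counts multiply. A metric such as Top Hit runs inside each bucket but, as far as I know, does not create any buckets of its own.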

I am actually looking for a simple example to understand how buckets get generated in a basic scenario (even if the max is not reached). This is just for my understanding (not to solve a particular issue).
