Hi all,
I'm trying to create a Data Table visualization based on an index that has around 200,000 rows of data. However, when Kibana tries to render the visualization with no filters applied beforehand, it fails with the following error.
Trying to create too many buckets. Must be less than or equal to: [2000] but was [2001]. This limit can be set by changing the [search.max_buckets] cluster level setting.
Is there a way to work around this limit? For example, display data in the data table only when filters are applied, or display the data table within the "max buckets" limit and paginate as required?
The expected result is either to display all 200,000 rows split into pages when the dashboard loads, or to display nothing in the data table until a filter is applied. Is either of these possible?
I'm an absolute beginner, so I'm trying to understand the various options out there. I'm on version 7.10.0.
There have been a lot of improvements in this area in the last 3+ years so upgrading would be great.
But to be honest, I'm not sure I understand the problem you're describing.
The fact that you have 200,000 rows does not seem related to the error message.
Maybe sharing a screen capture of what you are doing, or trying to do, would help...
Thanks again, so the following is what I'm trying to do:
I have a Data Table visualisation that is based on an Elasticsearch index.
The Data Table visualisation has a metric of Count. The table needs 19 columns, so the rows are split using a Terms aggregation with a size of 100.
Running the visualisation with just Count as the metric shows a value close to 200,000. However, when I add columns (that is, split the rows), it fails with the error below.
too_many_buckets_exception
Trying to create too many buckets. Must be less than or equal to: [2000] but was [2001]. This limit can be set by changing the [search.max_buckets] cluster level setting.
> Error: Service Unavailable
> at Fetch._callee3$ (https://site-url:5601/35949/bundles/core/core.entry.js:6:59535)
> at l (https://site-url:5601/35949/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:380:1740519)
> at Generator._invoke (https://site-url:5601/35949/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:380:1740272)
> at forEach.e.<computed> [as next] (https://site-url:5601/35949/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:380:1740876)
> at fetch_asyncGeneratorStep (https://site-url:5601/35949/bundles/core/core.entry.js:6:52652)
> at _next (https://site-url:5601/35949/bundles/core/core.entry.js:6:52968)
We've tried setting the search.max_buckets parameter back to its default value, but the same too_many_buckets_exception occurs.
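For reference, this is the kind of request we used to adjust the setting (a sketch via the Dev Tools console; the value 20000 is just an illustrative number, and raising the limit increases memory pressure on the cluster):

```
PUT _cluster/settings
{
  "transient": {
    "search.max_buckets": 20000
  }
}
```

Setting the value to null instead removes the transient override and falls back to the cluster default.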
@dadoonet I do understand that upgrading would be great - but unfortunately, in this case Elasticsearch and Kibana are tightly bundled with the application, and upgrading them would break a lot of its functionality.
Any pointers or inputs would be great! If you had to display close to 200,000 rows of data in a table format, would you still make use of Data Table or would you go for some other visualization?
No. I don't see anything which could help.
You can't really generate more than 2000 buckets.
It would blow up your memory and network bandwidth, and would probably crash your browser.
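If you need to enumerate that many term buckets outside of Kibana, Elasticsearch's composite aggregation can page through them instead of building them all in one response (a sketch against a hypothetical index my-index and field status; the classic Data Table visualization does not use this aggregation):

```
POST my-index/_search
{
  "size": 0,
  "aggs": {
    "rows": {
      "composite": {
        "size": 1000,
        "sources": [
          { "status": { "terms": { "field": "status" } } }
        ]
      }
    }
  }
}
```

Each response includes an after_key; passing it back in an "after" parameter fetches the next page of buckets.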
In this case, I did try creating a Discover table, which partially solves my issue - but the column headings all come straight from the Elasticsearch index field names.
On Kibana v7.10.0, is there a way to override these column headings? I wasn't able to find anything - can you please advise if I'm missing something?
Your advice and inputs have been phenomenal - thank you!