We are getting a "too many buckets" exception when viewing the visualization. The default limit for search.max_buckets is 10000. If we want to increase this parameter to 20000 to resolve the exception, how much extra memory would be required to accommodate the extra buckets?
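For reference, this is roughly how we plan to raise the limit, using the cluster settings API. A minimal sketch only, assuming Elasticsearch is reachable at http://localhost:9200 with no auth (adjust host/credentials for your cluster); 20000 is just the value we intend to try, not a recommendation:

```python
import requests

ES = "http://localhost:9200"  # assumed cluster endpoint

# search.max_buckets is a dynamic cluster setting, so it can be updated
# at runtime via the cluster settings API.
resp = requests.put(
    f"{ES}/_cluster/settings",
    json={"persistent": {"search.max_buckets": 20000}},
)
print(resp.json())
```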
We have around 100,000 (1 lakh) documents in an index. There is one field with distinct values, and we are creating a pie chart visualization in the Kibana UI on that field. When we set the size to 6000 it works fine, but when we increase the size to 7000 it throws the bucket exception. So if we want to increase max_buckets, how much extra memory do we need?
Or how can we check the memory increase if we raise max_buckets?
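What we are doing right now to compare memory is a rough before/after check of heap usage via the node stats API. A minimal sketch, again assuming a local unauthenticated cluster; heap readings are noisy because of GC, so this only gives a ballpark:

```python
import requests

ES = "http://localhost:9200"  # assumed cluster endpoint

def heap_used_bytes():
    """Sum heap_used_in_bytes across all nodes from GET _nodes/stats/jvm."""
    stats = requests.get(f"{ES}/_nodes/stats/jvm").json()
    return sum(
        node["jvm"]["mem"]["heap_used_in_bytes"]
        for node in stats["nodes"].values()
    )

before = heap_used_bytes()
# ... open the pie chart / run the aggregation with the larger size here ...
after = heap_used_bytes()
print(f"heap used before: {before} bytes, after: {after} bytes")
```

Is this the right way to measure it, or is there a better way to see the memory cost of the extra buckets?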
Is heap memory alone responsible for bucket operations (when we increase the max_buckets limit), or do the pod's resources play a role as well?