Extra memory required to accommodate extra buckets

We are getting a "too many buckets" exception when viewing a visualization. The default limit for search.max_buckets is 10000. If we increase this setting to 20000 to resolve the exception, how much extra memory would be required to accommodate the extra buckets?
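For reference, this is roughly how we plan to raise the limit (a sketch only, assuming we ship it through elasticsearch.yml in our chart's configuration; search.max_buckets is a dynamic setting, so the cluster settings API is another way to change it):

  # elasticsearch.yml (or the chart values key that renders it)
  # Dynamic setting, so it can also be updated via the cluster settings API;
  # the static form here is simply applied at node startup from the config file.
  search.max_buckets: 20000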

Thank you in advance!

BR
Prashant

That's a little hard to say without more information on your setup.

Hi @warkolm
Thanks for the response

We are using 3 master, 3 client, and 2 data pods in the cluster. Below are the configurations:

For client:

  limits:
    cpu: "1"
    memory: "4Gi"
  requests:
    cpu: "500m"
    memory: "2Gi"
es_java_opts: "-Xms2g -Xmx2g"

For master:

  limits:
    cpu: "1"
    memory: "2Gi"
  requests:
    cpu: "500m"
    memory: "1Gi"
es_java_opts: "-Xms1g -Xmx1g"

For data:

  limits:
    cpu: "1"
    memory: "4Gi"
  requests:
    cpu: "500m"
    memory: "2Gi"
es_java_opts: "-Xms2g -Xmx2g"

For Kibana:

  limits:
    cpu: "1000m"
    memory: "2Gi"
  requests:
    cpu: "500m"
    memory: "1Gi" 

We have around 100,000 (1 lakh) documents in an index. There is one key that has distinct values. We are creating a pie chart visualization in the Kibana UI. When we set the size to 6000 it works fine, but when we set it to 7000 it throws the bucket exception. So if we want to increase search.max_buckets, how much extra memory do we need?

Or, how can we check the memory increase if we raise search.max_buckets?

Thanks,
Prashant

That's a pretty small cluster. I would start by doubling the heap on your data nodes and then seeing if the same issue occurs.
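As a rough sketch of what that could look like, reusing the keys you posted (this assumes your chart exposes the same limits/requests/es_java_opts fields):

For data:

  limits:
    cpu: "1"
    memory: "8Gi"        # container limit raised so the heap stays at roughly half of it
  requests:
    cpu: "500m"
    memory: "4Gi"
es_java_opts: "-Xms4g -Xmx4g"   # heap doubled from 2g to 4g

The container memory is raised along with the heap because the usual guidance is to keep the JVM heap at no more than about half of the memory available to the node.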

Thanks @warkolm

Is heap memory the only thing responsible for carrying out the bucket operations (when we increase the search.max_buckets limit)? Or do the pod resources play a role as well?

Thanks,
Prashant

It's primarily heap.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.