Increasing max_buckets for specific Visualizations

Hi,

Today we ran into an error when viewing a visualization over a period of 24 hours. The error said "Courier fetch: 1 out of 8 shards failed".

On deeper inspection, we found that the query behind the visualization failed because of max_buckets. Our setting for max_buckets is 10000 and the query needs more buckets than that. When we increased the limit to 20000, the query and the visualization run fine over a period of 24 hours.

max_buckets is a cluster-level setting and we don't want to keep it at 20000 for the entire cluster; we only want it raised for that particular visualization. Is this possible?

Thanks and Regards,
Nikhil

Unfortunately, max_buckets is only available as a cluster-level setting, so what you're asking is not possible. You would have to change the cluster settings to load the visualization:

PUT _cluster/settings
{
  "transient": {
    "search.max_buckets": 20000
  }
}
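
If you later want to go back to the default, a transient setting can be reset by setting it to null, roughly like this:

PUT _cluster/settings
{
  "transient": {
    "search.max_buckets": null
  }
}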

Hi @nickpeihl,

Thank you for the update!

Do you have any more details on the max_buckets setting, such as what the impact of increasing it may be? How much can I increase it without impacting Elasticsearch?

We set it to 10000 to prevent killer queries from being executed on Elasticsearch, but we have not worked out what the optimal value of max_buckets is for our cluster.

Also, I would like to know whether there is any auditing that would help us find out whether a visualization was modified by users. Any view on this would be very helpful.

Thanks,
Nikhil

Elasticsearch sets a default limit of 10000 for the search.max_buckets setting. This can be changed, but doing so can have a detrimental effect on the cluster if, as you say, someone sends "killer queries".
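
If you want to see what the cluster is currently using, something like this should show it; the default value only appears when include_defaults is set:

GET _cluster/settings?include_defaults=true&flat_settings=true

Look for search.max_buckets under "defaults" (or under "transient" once you have overridden it).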

Kibana has a default limit of 2000 for the max buckets in the Advanced Settings under Management. This is a conservative limit and can be set higher, but passing a lot of data around in the browser can be extremely resource intensive and cause browser hangs.

Perhaps we should try to optimize your visualization rather than change cluster settings. Which visualization are you using that needs so many buckets? Does it make sense to visualize that much data at one time? There is a limit to how much detail the human eye can perceive.
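
For example, a date histogram over 24 hours at a one-minute interval already produces 1440 buckets per series, and a terms split multiplies that. A rough sketch with made-up index and field names:

GET my-logs-*/_search
{
  "size": 0,
  "aggs": {
    "per_host": {
      "terms": { "field": "host.name", "size": 10 },
      "aggs": {
        "over_time": {
          "date_histogram": { "field": "@timestamp", "interval": "1h" }
        }
      }
    }
  }
}

With "interval": "1m" this needs 10 x 1440 = 14400 buckets and trips a 10000 limit; with "1h" it is only 10 x 24 = 240.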


Hi @nickpeihl,

We have stuck with max_buckets at 10000 and won't be changing any cluster settings that could affect the whole cluster.

I had one idea; please let me know whether it would make any difference.

The visualization runs over a period of 24 hours on one index and does many aggregations. If I reindex the original index (1 shard, 1 replica) into a new index containing only the fields I need for aggregation, and change the new index's settings to 1 shard and 3 replicas, would it by any chance need fewer buckets for the 24-hour period?
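
For reference, the reindex I am thinking of would be roughly this (index and field names are placeholders):

POST _reindex
{
  "source": {
    "index": "original-index",
    "_source": ["@timestamp", "field_a", "field_b"]
  },
  "dest": {
    "index": "new-index"
  }
}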

The ES cluster consists of 3 nodes (master + data).

Thanks

Anything you can do to limit the granularity of the data you are querying may help. Your idea sounds similar to the new rollup indices feature in Elasticsearch. I wonder if that feature would help you?

https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-rollup.html
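
If it fits your data, a rollup job would look roughly like this (job name, index patterns, fields, and intervals are only illustrative; check the docs for your exact version):

PUT _rollup/job/my_rollup_job
{
  "index_pattern": "my-logs-*",
  "rollup_index": "my-logs-rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": { "field": "@timestamp", "interval": "1h" },
    "terms": { "fields": ["host.name"] }
  },
  "metrics": [
    { "field": "response_time", "metrics": ["min", "max", "avg"] }
  ]
}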

Hey @nickpeihl,

We did the reindexing I mentioned earlier, but it didn't work since the data returned and the aggregations remain the same.

We are also checking the rollup index feature, but it seems that in v6.3 visualizing a rollup index is not supported yet. Let me check further and get back.

Thank you for your help and suggestions.
