Risk of updating search.max_buckets

I am using Kibana 7.2.0.
What is the effect or risk of doing the following:

```
PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 50000
  }
}
```

When I looked it up, I found an article with the following statement:

Note that search.max_buckets is a safety soft limit designed to prevent runaway aggregations from overwhelming nodes. Therefore, be careful not to set the limit too high. If the limit trips frequently, it may be better to rework the aggregations so they return fewer buckets.

I want to know more specifically what a safety soft limit means.
Is there a risk of losing logs?
I'm also concerned about CPU performance.
And is it possible to create a graph for 20,000,000 logs?

There is indeed a risk of crashing or overloading the ES server, in which case you could lose data that is being ingested at that time. You can create a graph from 20,000,000 logs, or even way more, but only if the data is aggregated. I would highly recommend against creating 20,000,000 buckets.
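
To make "aggregated" concrete, here is a minimal sketch of the kind of request a time-series graph runs; the index name `my-logs` and the `@timestamp` field are just assumptions for illustration. A date histogram over millions of documents returns only one bucket per interval, so the bucket count stays far below the limit even though every log is counted:

```
GET my-logs/_search
{
  "size": 0,
  "aggs": {
    "logs_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1h"
      }
    }
  }
}
```

On 7.2 the `interval` parameter still works, although newer releases prefer `calendar_interval` or `fixed_interval`. A graph over 20,000,000 logs built this way only produces as many buckets as there are hours in the chosen time range.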

Thank you for the reply. You are a hero.

One more question:
What can I do to safely increase the value of search.max_buckets?
I would like a guideline value. Is there a document for this?
For example, something like: if the log volume is about this size,
then by increasing CPU or disk size, search.max_buckets can be set to XXXX, and so on.

I would say increasing the JVM heap size is the way to go to handle more buckets. More CPU will also make the requests take less time. This is off the top of my head; for more detailed tuning advice, the Elasticsearch team will have more info than me. Just post a question in their part of the forum and they'll get back to you.
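
If it helps, the heap size is configured in `config/jvm.options` (or via the `ES_JAVA_OPTS` environment variable); the values below are only placeholders, not a sizing recommendation:

```
# config/jvm.options -- illustrative values only
# Keep min and max heap equal; a common rule of thumb is ~50% of RAM, staying below ~31 GB
-Xms8g
-Xmx8g
```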

From experience, I would never suggest changing search.max_buckets. I suggest keeping the default of 10K buckets and using partitions.
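
To show what I mean by partitions (the index and field names here are only placeholders): the terms aggregation can split the set of terms into `num_partitions` slices and return one slice per request, so each request stays within the 10K default:

```
GET my-logs/_search
{
  "size": 0,
  "aggs": {
    "by_user": {
      "terms": {
        "field": "user_id",
        "size": 10000,
        "include": {
          "partition": 0,
          "num_partitions": 20
        }
      }
    }
  }
}
```

You then repeat the request with `partition` going from 0 to 19 and merge the results on the client side.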

To ylasri

Thank you for the reply.
Why do you recommend it?
What exactly happened?

To Matius

Thank you for the reply!!
Where is the "forum"?

This one: https://discuss.elastic.co/c/elasticsearch
