Risk of updating search.max_buckets

I am using Kibana 7.2.0.
What is the effect or risk of doing the following:

```
PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 50000
  }
}
```

When I looked it up, I found an article with the following passage:

Note that search.max_buckets is a safety soft limit designed to prevent runaway aggregations from harming nodes. Therefore, be careful not to set the limit too high. If the limit trips frequently, it may be necessary to review your aggregations or make them smaller.
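As far as I understand, the limit trips when a single search tries to build a very large number of buckets at once, for example a terms aggregation that asks for a huge number of unique values. This is only a sketch; the index name my-logs and the field client_ip are made up:

```
GET my-logs/_search
{
  "size": 0,
  "aggs": {
    "by_client": {
      "terms": {
        "field": "client_ip",
        "size": 100000
      }
    }
  }
}
```

If the field has more unique values than the default limit of 10,000 buckets allows, a request like this would fail.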

I want to know more specifically what "safety soft limit" means.
Is there a risk of losing logs?
I'm also concerned about CPU performance.
And is it possible to create a graph for 20,000,000 logs?

There is indeed a risk of crashing or overloading the ES server, in which case you could lose data that is being ingested at that time. You can create a graph of 20,000,000 logs, or even far more, but only if the data is aggregated. I would highly recommend against creating 20,000,000 buckets.
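For example, an aggregation along these lines summarizes tens of millions of log documents into at most a few hundred buckets, which is what you want for a graph. This is only a sketch: the index name my-logs and the @timestamp field are assumptions, so adjust them to your data:

```
GET my-logs/_search
{
  "size": 0,
  "aggs": {
    "logs_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "1d"
      }
    }
  }
}
```

A daily date_histogram over a year of data creates roughly 365 buckets no matter how many documents fall into each one; the limit is about the number of buckets, not the number of documents. (On older 7.x releases the parameter is still called interval rather than calendar_interval.)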

Thank you for the reply. You are a hero.

One more thing:
What can I do to increase the value of search.max_buckets?
I want a guideline value. Is there a document for this?
For example, something like: if the log volume is about this size,
and you increase the CPU or disk size, then search.max_buckets can be set to XXXX, etc.
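For reference, this is what I was planning to try: first check the current and default settings, then raise the limit transiently so the change is easy to back out. This is only a sketch, and 20000 is just a placeholder, not a recommended value:

```
GET _cluster/settings?include_defaults=true&flat_settings=true

PUT _cluster/settings
{
  "transient": {
    "search.max_buckets": 20000
  }
}
```

The first request lists the effective cluster settings (including defaults), where search.max_buckets appears; the second raises it transiently instead of persistently, as in the update above.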