We are running several Elasticsearch version 7.x clusters with a total of more than 100 data nodes. Each data node has 32 GB of memory and 8 vCPUs.
I have some questions about the request circuit breaker: Link to documentation
- What is the best practice for setting the limit? By default it is 60%. Can we safely increase it to a higher value? The reason we ask is that we are encountering a lot of "New used memory for data of preallocate[aggregations] would be larger than the configured breaker" errors.
- Quoting from the docs:

  > The request circuit breaker allows Elasticsearch to prevent per-request data structures (for example, memory used for calculating aggregations during a request) from exceeding a certain amount of memory.

  Can you please clarify this? Is the limit applied per request (meaning Elasticsearch calculates and checks whether a single request exceeds it), or is it a shared limit across all concurrent requests (meaning the total memory tracked for all in-flight requests at a time)?
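For reference, this is how we are currently adjusting the limit. `indices.breaker.request.limit` is a dynamic cluster setting, so it can be changed without a restart; the 70% value here is just an example, not a value we have validated:

```
PUT _cluster/settings
{
  "persistent": {
    "indices.breaker.request.limit": "70%"
  }
}
```

We would like to understand the trade-offs before rolling a change like this out, since the limit is a percentage of the JVM heap and raising it presumably leaves less headroom for the other breakers.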
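To give some context on what we observe, we have been checking per-node breaker usage with the nodes stats API, which reports the configured limit, the currently estimated size, and the trip count for each breaker:

```
GET _nodes/stats/breaker
```

The `request` breaker's `tripped` counter on several data nodes increases steadily during heavy aggregation workloads, which is what prompted these questions.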