Request circuit breaker

We are running several Elasticsearch version 7.x clusters with a total of more than 100 data nodes. Each data node has 32 GB of memory and 8 vCPUs.

I have some questions about the request circuit breaker: Link to documentation

  • What is the best practice for setting the limit? By default, it is set to 60%. Can we increase it to a higher number? The reason is that we are encountering a lot of "New used memory for data of preallocate[aggregations] would be larger than the configured breaker" errors.

  • Quoting from the docs:

The request circuit breaker allows Elasticsearch to prevent per-request data structures (for example, memory used for calculating aggregations during a request) from exceeding a certain amount of memory.

Can you please explain this? Is the limit checked per request (meaning Elasticsearch estimates whether a single request would exceed it), or is it a shared limit across all requests running concurrently on a node?
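For reference, here is what the default 60% limit works out to per node as a back-of-the-envelope calculation. The 16 GB heap is an assumption on my part (half of each node's 32 GB of RAM, which is the commonly recommended split), not something stated above:

```python
# Rough per-node request-breaker budget under the default 60% limit.
# Assumption: 16 GiB JVM heap (half of the node's 32 GB RAM).
heap_bytes = 16 * 1024**3            # 16 GiB heap in bytes
breaker_limit = 0.60 * heap_bytes    # default indices.breaker.request.limit
print(breaker_limit / 1024**3)       # budget in GiB → 9.6
```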

Hi @bugb,

Welcome back! You can tweak the circuit breaker settings and increase the threshold. Generally the defaults are pretty sensible, and increasing the limits means you're running a higher risk of the node crashing with an out-of-memory error. I would recommend looking at the requests you are sending to figure out why you are triggering the circuit breaker.
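If you do decide to raise it, `indices.breaker.request.limit` is a dynamic cluster setting, so you can change it without a restart. A sketch, assuming a node reachable on `localhost:9200`:

```shell
# Inspect current breaker usage and trip counts per node.
curl -s 'localhost:9200/_nodes/stats/breaker?pretty'

# Raise the request breaker limit cluster-wide (dynamic setting,
# no restart needed; "persistent" survives cluster restarts).
curl -s -X PUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"indices.breaker.request.limit": "70%"}}'
```

Checking the `_nodes/stats/breaker` output first is worthwhile: the `tripped` counters tell you which breaker is firing and how often before you change anything.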

My understanding from reading that excerpt is that the data structures being tracked are per-request, but the breaker tallies their estimated memory cumulatively against a node-level limit, taking into account the overhead multiplier; so either one large request or several concurrent ones can trip it. I'm sure someone will chime in if this is not correct. There is a bit of detail in this post that might also help.
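One way to picture the accounting: each request reserves an estimate of its memory against a shared tally, and the breaker trips when the tally would exceed the limit. A toy sketch of that idea (not Elasticsearch's actual implementation):

```python
# Toy sketch of cumulative circuit-breaker accounting: every in-flight
# request reserves an estimate against one shared limit, so concurrent
# requests add up. Not Elasticsearch's real code.

class CircuitBreakingException(Exception):
    pass

class RequestBreaker:
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.used = 0  # shared tally across all concurrent requests

    def add_estimate(self, bytes_needed):
        """Reserve memory for a request; trip if the tally would exceed the limit."""
        if self.used + bytes_needed > self.limit:
            raise CircuitBreakingException(
                f"would use {self.used + bytes_needed} bytes, limit {self.limit}")
        self.used += bytes_needed

    def release(self, bytes_freed):
        """Return memory to the pool when a request finishes."""
        self.used -= bytes_freed

breaker = RequestBreaker(limit_bytes=100)
breaker.add_estimate(60)      # first request's aggregation buffers: fine
try:
    breaker.add_estimate(60)  # a concurrent request pushes the tally past 100
except CircuitBreakingException:
    print("tripped")          # → tripped
```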

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.