Circuit Breaker Exception

I am working on an ES circuit breaker setup. I get a circuit breaker exception when running a heavy terms aggregation. But when multiple ES queries, each using less JVM heap than the limit, are executed in parallel, we get an Out Of Memory error instead.

Question:

  • Suppose the JVM heap is 450 MB and the limit is set to 40% (i.e. 180 MB). Does the circuit breaker check whether an executed query exceeds 180 MB?
  • If we run multiple queries that require 160 MB each (i.e. executing 5-6 queries in parallel), do we still get a circuit breaker exception? (See the rough arithmetic sketch below.)
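
What I think is happening in the second question (illustrative numbers; my understanding is that the breakers only see the memory they explicitly account for, not every allocation a query makes — please correct me if wrong):

Tracked by request breaker, per query:  ~40 MB estimate (well under the 180 MB limit, no trip)
Actual heap used, per query:            ~160 MB (aggregation memory the breakers never reserved)
5 queries in parallel:                  5 x 160 MB = 800 MB real heap demand
JVM heap:                               450 MB -> OutOfMemoryError before any breaker trips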

Circuit Breaker Setting in Elasticsearch.yml:
indices.breaker.total.use_real_memory: false
indices.breaker.total.limit: "40%"
indices.breaker.request.limit: "40%"
indices.breaker.request.overhead: 2
indices.breaker.accounting.limit: "40%"
indices.breaker.accounting.overhead: 2
network.breaker.inflight_requests.limit: "40%"
network.breaker.inflight_requests.overhead: 2
indices.breaker.fielddata.limit: "40%"
indices.breaker.fielddata.overhead: 2
indices.breaker.fielddata.type: "memory"
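
For reference, the live state of each breaker (limit, estimated bytes, trip count) can be checked with the node stats API, which is a standard endpoint:

GET _nodes/stats/breaker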

JVM Heap: 450 MB

I hope to see your feedback and suggestions.

Thank you.

I believe I have seen Elasticsearch run with a heap of 512MB, but that assumes very light usage, as there is a certain amount of static overhead. For any kind of intense usage I would expect a significantly larger heap to be required. Playing with circuit breakers is the wrong thing to do when you have a heap that small, IMHO.

I would recommend removing any custom circuit breaker settings and instead increasing the heap to 1GB to see if that helps. Depending on your load, data volume and queries this may need to be increased further.
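
For reference, heap size is set in config/jvm.options (or a file under config/jvm.options.d/ on recent versions); a minimal sketch, keeping the minimum and maximum equal as recommended:

-Xms1g
-Xmx1g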

The 450 MB JVM heap is just for testing the circuit breaker. It is not the actual value used in deployment.

Look at how much heap is used just after you have started up the node and it is in green state. That could well be most of it if you have indexed some data.
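
A quick way to see this is the cat nodes API (standard columns):

GET _cat/nodes?v&h=name,heap.current,heap.percent,heap.max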

The default circuit breaker settings have sensible values that generally do not need to be tuned. I do not see the point in tuning them for a very small heap size that will not be used later on.
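
For comparison, the 7.x defaults (from the documentation; worth verifying against your exact version) are noticeably less aggressive than 40% across the board:

indices.breaker.total.limit: 95%        # 70% when use_real_memory is false
indices.breaker.request.limit: 60%
indices.breaker.fielddata.limit: 40%
network.breaker.inflight_requests.limit: 100%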


I agree with Christian here.

Apologies for the low heap size. There are 2 indices, each around 950 MB in size with 5M documents.

What about using a JVM heap size of 5GB, and executing heavy ES queries with multiple terms aggregations and a cardinality aggregation, with similar queries executing in parallel? (A sketch of the kind of query I mean is below.)
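
For illustration, this is roughly the shape of the query (hypothetical index and field names, not the exact query):

GET my-index/_search
{
  "size": 0,
  "aggs": {
    "by_category": {
      "terms": { "field": "category", "size": 1000 },
      "aggs": {
        "by_vendor": {
          "terms": { "field": "vendor", "size": 1000 },
          "aggs": {
            "unique_users": { "cardinality": { "field": "user_id" } }
          }
        }
      }
    }
  }
}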

Can the circuit breaker throw an exception when the overall JVM heap exceeds the limit? In my tests, it does not throw an exception when overall JVM heap usage exceeds the limit. Instead, it throws an exception only when a single query's memory requirement exceeds the limit.

OR

Could you please provide more information on how a circuit breaker works for a single query and for parallel ES queries?
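
My current understanding (please correct me if wrong): with indices.breaker.total.use_real_memory: false, the parent breaker only checks the sum of bytes the child breakers have explicitly reserved, not real heap usage. Re-enabling real-memory accounting, which is the default on 7.x, should make the parent breaker trip on actual overall heap usage:

indices.breaker.total.use_real_memory: true
indices.breaker.total.limit: 95%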

Thank you in advance.

With this 5GB heap size, are you using the default settings? What happens during high load? Are you monitoring heap usage? Is there anything in the logs?

JVM heap usage increased and the log shows Out Of Memory. There are also some other log entries while JVM heap usage is increasing:

[2022-01-10T10:46:33,878][WARN ][o.e.m.j.JvmGcMonitorService] [Node1] [gc][1756] overhead, spent [881ms] collecting in the last [1s]

Yes, I am monitoring from Kibana Stack Monitoring.

That sounds like a very long collection time for such a small heap. Do you have swap enabled or are you maybe running on a VM with memory ballooning?

Does this happen with default settings? What load is the cluster under? How much total memory does the host have?
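
As an aside, if swap does turn out to be involved, the usual mitigation is to lock the heap in memory via a standard elasticsearch.yml setting (the memlock ulimit must also be raised at the OS level):

bootstrap.memory_lock: true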

Currently, I am testing locally on a Mac with 16 GB RAM. Yes, swap is enabled. I have tested on an AWS EC2 instance as well.

Yes, it happens with the default settings as well.
