Circuit breaker problem

So, my situation is:
I have a fairly big cluster (10TB) of real-time log data. Today is a high-load day and I am getting circuit breaker errors:

## circuit_breaking_exception at `shard` 0 `index` open--2020.09.30 `node` 50QMPkY_TZa7TA_w3LJ86g

**Type:** circuit_breaking_exception

**Reason:** [parent] Data too large, data for [indices:data/read/search[phase/query]] would be [4100722450/3.8gb], which is larger than the limit of [4080218931/3.7gb], real usage: [4100718872/3.8gb], new bytes reserved: [3578/3.4kb], usages [request=0/0b, fielddata=622918802/594mb, in_flight_requests=3578/3.4kb, model_inference=0/0b, accounting=136031956/129.7mb]

**Bytes wanted:** 4100722450

**Bytes limit:** 4080218931

**Durability:** PERMANENT

I assumed this was coming from the ES node that runs on the same server as Kibana and only handles requests from Kibana. That node had 4GB RAM; I upped it to 12GB and restarted the node, but I still get the same error.

I upped the limits of these settings (see the sketch below):
- `indices.breaker.total.limit`
- `indices.breaker.fielddata.limit`
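
Both of these are dynamic settings, so one way to change them at runtime is the cluster settings API; a minimal sketch in Kibana Dev Tools syntax, with percentage values that are purely illustrative:

```
PUT _cluster/settings
{
  "persistent": {
    "indices.breaker.total.limit": "80%",
    "indices.breaker.fielddata.limit": "40%"
  }
}
```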

And I am still getting the same error, always with this 3.7GB limit. Can anyone explain where this 3.7GB limit originates?

3.7GB comes from the heap size you allocated to Elasticsearch: the parent circuit breaker defaults to 95% of the JVM heap, and 95% of a 4GB heap is exactly the 4080218931 bytes in your error. From what I read, the memory is held by other requests, so there is not enough left to serve this particular request even though the request itself is small (only 3.4kb of new bytes). Since you have enough physical memory on the machine, you can increase the ES heap size with the JVM options -Xms and -Xmx.
Not sure if it will completely solve your problem, but it is worth a try.
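
For reference, the heap is set in jvm.options, or on ES 7.7+ in a file under jvm.options.d/; a minimal sketch for a 12GB heap (the file name here is assumed):

```
# config/jvm.options.d/heap.options
# Keep initial and max heap equal to avoid resize pauses
-Xms12g
-Xmx12g
```

After restarting the node, the effective limit and live usage of every breaker can be checked with `GET _nodes/stats/breaker`.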

I have done that; the ES nodes handling these requests have a lot more memory than 3.7GB, they have 12GB each.

I don't understand how I managed to overlook that the error gives me the ID of the node causing the problem. I of course just had to cross-reference the node ID with the node name and restart that node, and the problem is solved.
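
In case it helps anyone later: one way to cross-reference a node ID from the error with a node name is the cat nodes API; a minimal sketch (the column selection is just what I found useful):

```
GET _cat/nodes?v&full_id=true&h=id,name,heap.current,heap.max
```

The `id` column then matches the node ID in the circuit_breaking_exception.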

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.