So, my situation is:
I have a pretty big cluster (10 TB) of real-time log data. Today is a high-load day, and I am getting circuit breaker errors:
## circuit_breaking_exception at `shard` 0 `index` open--2020.09.30 `node` 50QMPkY_TZa7TA_w3LJ86g
- **Type:** `circuit_breaking_exception`
- **Reason:** `[parent] Data too large, data for [indices:data/read/search[phase/query]] would be [4100722450/3.8gb], which is larger than the limit of [4080218931/3.7gb], real usage: [4100718872/3.8gb], new bytes reserved: [3578/3.4kb], usages [request=0/0b, fielddata=622918802/594mb, in_flight_requests=3578/3.4kb, model_inference=0/0b, accounting=136031956/129.7mb]`
- **Bytes wanted:** 4100722450
- **Bytes limit:** 4080218931
- **Durability:** PERMANENT
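A quick sanity check I did on the numbers in the error (my own arithmetic, not something taken from the docs): the byte limit works out to exactly 95% of a 4 GiB heap, which may be relevant to where the figure comes from.

```python
# Sanity-check arithmetic on the values reported in the error message.
GIB = 1024 ** 3

bytes_limit = 4080218931   # "Bytes limit" from the error
bytes_wanted = 4100722450  # "Bytes wanted" from the error

# The limit is exactly 95% of a 4 GiB heap:
assert bytes_limit == int(0.95 * 4 * GIB)

# Both values are just under/over ~3.8 GiB; the "3.7gb" shown in the
# message appears to be this value truncated, not rounded.
print(bytes_limit / GIB)   # ~3.7999...
print(bytes_wanted / GIB)  # ~3.8191...
```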
I assumed this was coming from the ES node running on the same server as Kibana, which only handles requests from Kibana and had 4 GB of RAM. I upped that to 12 GB and restarted the node, but I still get the same error.
I also upped the limits of these settings:

- `indices.breaker.total.limit`
- `indices.breaker.fielddata.limit`
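For context, I changed them via the cluster settings API along these lines (the percentages here are illustrative, not my exact values):

```
PUT _cluster/settings
{
  "transient": {
    "indices.breaker.total.limit": "85%",
    "indices.breaker.fielddata.limit": "50%"
  }
}
```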
And I am still getting the same error, always with this 3.7 GB limit. Can anyone explain where this 3.7 GB limit originates?