Circuit_breaking_exception on read/write

I keep getting this error on my cluster - it does not go away after a restart.

{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be larger than limit of [290986393/277.5mb]","bytes_wanted":299401440,"bytes_limit":290986393}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be larger than limit of [290986393/277.5mb]","bytes_wanted":299401440,"bytes_limit":290986393},"status":503}
from /app/vendor/bundle/ruby/2.3.0/gems/elasticsearch-transport-5.0.0/lib/elasticsearch/transport/transport/base.rb:201:in `__raise_transport_error'
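
To see the breaker numbers directly, I've been polling the nodes stats API with the same Ruby client the backtrace comes from. A minimal sketch (`ELASTICSEARCH_URL` is a placeholder for my Elastic Cloud endpoint and credentials):

```ruby
require 'elasticsearch'

# Placeholder connection setup; substitute your own endpoint.
client = Elasticsearch::Client.new(url: ENV['ELASTICSEARCH_URL'])

# Nodes stats with the 'breaker' metric reports each circuit breaker's
# current estimated size and configured limit, per node.
client.nodes.stats(metric: 'breaker')['nodes'].each_value do |node|
  parent = node['breakers']['parent']
  puts "#{node['name']}: parent #{parent['estimated_size']} / #{parent['limit_size']}"
end
```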

I haven't been able to figure out exactly what is causing it, especially as I am at only 6.8% storage usage and I have tested with a variety of indices - from empty, to 200 records, to 50k records.

My setup has an index per user account in my application - https://www.elastic.co/guide/en/elasticsearch/guide/current/user-based.html
I assume this is causing the issue - each index carries a base memory overhead, so once you reach n indices per 'x' GB of RAM the breaker starts tripping. A quick way to quantify that is below.
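
This sketch (reusing the placeholder client setup from above) tallies indices and shards; each open shard holds segment metadata, mappings, and other structures on the heap, so many small per-user indices add up even when they contain almost no data:

```ruby
require 'elasticsearch'

client = Elasticsearch::Client.new(url: ENV['ELASTICSEARCH_URL'])

# The cat indices API with format: 'json' returns one hash per index;
# 'pri' and 'rep' are the primary shard count and replica count.
indices = client.cat.indices(format: 'json', h: 'index,pri,rep,docs.count')
shards  = indices.reduce(0) { |n, i| n + i['pri'].to_i * (1 + i['rep'].to_i) }
puts "#{indices.size} indices, ~#{shards} shards in total"
```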

I am using Elastic Cloud, so the fix outlined here will not work for me: https://discuss.elastic.co/t/circuit-breaking-exception-on-fielddata/65678/2

Can anyone explain why this is occurring? Is there any way to reduce memory load by unloading unused indices (their settings, mappings, and other in-memory state)?
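
The only lever I can think of that still works on Elastic Cloud is closing indices: a closed index stays on disk but releases its heap, at the cost of failing reads and writes until it is reopened. A sketch, with `inactive_user_index` as a placeholder name:

```ruby
require 'elasticsearch'

client = Elasticsearch::Client.new(url: ENV['ELASTICSEARCH_URL'])

# Closing an index releases its in-memory structures; the data stays on
# disk, but reads and writes fail until the index is reopened.
client.indices.close(index: 'inactive_user_index')

# Reopen on demand, e.g. when that user logs back in.
client.indices.open(index: 'inactive_user_index')
```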

I'm trying to understand why the circuit breaker is kicking in, rather than just throwing more memory at the problem - I imagine more RAM would make it go away, but I'd like to know the cause first.
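
Before resizing, I'd want to confirm heap really is the constraint. The parent breaker defaults to 70% of the JVM heap on the 5.x/6.x lines, so if `heap_used_percent` sits near that line the node is genuinely short on memory rather than hitting a one-off spike. Same placeholder client as above:

```ruby
require 'elasticsearch'

client = Elasticsearch::Client.new(url: ENV['ELASTICSEARCH_URL'])

# Nodes stats with the 'jvm' metric includes current heap usage per node.
client.nodes.stats(metric: 'jvm')['nodes'].each_value do |node|
  puts "#{node['name']}: heap #{node['jvm']['mem']['heap_used_percent']}% used"
end
```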
