Elasticsearch version 2.3.1.
For searches that include heavy aggregations over a long period of time (1 year of data in this case), I start getting:

    WARN request:143 - [request] New used memory 6915236168 [6.4gb] for data of [reused_arrays] would be larger than configured breaker: 6871947673 [6.3gb], breaking
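For reference, the failing requests look roughly like this (the index pattern, field names, and interval are placeholders, not my exact query): a date_histogram over a year of data with a terms sub-aggregation.

    curl -XPOST 'localhost:9200/logs-*/_search?pretty' -d '{
      "size": 0,
      "query": { "range": { "@timestamp": { "gte": "now-1y" } } },
      "aggs": {
        "per_day": {
          "date_histogram": { "field": "@timestamp", "interval": "day" },
          "aggs": {
            "by_user": { "terms": { "field": "user_id", "size": 100 } }
          }
        }
      }
    }'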
I believe this is the limit imposed by:

    indices.breaker.request.limit
It doesn't seem to be dynamically updatable, and despite the breaker limit being set, I still got an OutOfMemoryError.
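For context, the breaker's current state (configured limit, estimated size, trip count) can be inspected per node with the node stats API:

    curl -XGET 'localhost:9200/_nodes/stats/breaker?pretty'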
Is there a way to clear this memory dynamically? I already clear the caches with _cache/clear from a curl request running periodically whenever _cat/fielddata reports more than 5 GB (sketch below); is there something similar I can do to prevent the request breaker from tripping (and the OOM) as well?
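For reference, the periodic job is roughly equivalent to this sketch (the host, the 5 GB threshold, and the awk summing are simplifications of what actually runs):

    #!/bin/sh
    # Sum fielddata usage in bytes across all nodes; _cat/fielddata
    # prints one line per node with the total in the last column.
    TOTAL=$(curl -s 'localhost:9200/_cat/fielddata?bytes=b' | awk '{sum += $NF} END {print sum+0}')
    # Clear all caches once fielddata exceeds 5 GB (5368709120 bytes).
    if [ "$TOTAL" -gt 5368709120 ]; then
      curl -XPOST 'localhost:9200/_cache/clear'
    fi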