New used memory [6.4gb] for data of [<reused_arrays>] would be larger than configured breaker

Elasticsearch version 2.3.1.

For searches that include heavy aggregations over a long period of time (1 year of data in this case), I start getting:

WARN request:143 - [request] New used memory 6915236168 [6.4gb] for data of [reused_arrays] would be larger than configured breaker: 6871947673 [6.3gb], breaking

I believe this is the limit imposed by:

indices.breaker.request.limit

And it doesn't seem to be dynamically updatable. Despite the breaker limit being set, I still got an OOM error.
Is there a way to clear this memory dynamically? I already clear the fielddata cache with _cache/clear (via a curl request that runs periodically) whenever _cat/fielddata shows 5+ GB; is there something similar I can do to prevent this one as well?
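For reference, this is roughly what that periodic check and clear look like (a minimal sketch; the host, port, and ~5 GB threshold are just the values described above, adjust as needed):

curl -s 'localhost:9200/_cat/fielddata?v&bytes=b'              # check per-node fielddata usage in bytes
curl -s -XPOST 'localhost:9200/_cache/clear?fielddata=true'    # clear the fielddata cache once it exceeds ~5 GB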

Can you share an example of the "heavy aggregation"? There might be things that can be done to reduce the cost of the request, e.g. using the breadth_first collect mode.

https://www.elastic.co/guide/en/elasticsearch/guide/2.x/_preventing_combinatorial_explosions.html
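For illustration, a minimal sketch of what that could look like on a terms aggregation over a year of data (the index name, field names, and date range below are placeholders, not taken from the actual request):

curl -XPOST 'localhost:9200/my-index/_search' -d '{
  "size": 0,
  "query": { "range": { "@timestamp": { "gte": "now-1y" } } },
  "aggs": {
    "top_terms": {
      "terms": {
        "field": "some_field",
        "size": 10,
        "collect_mode": "breadth_first"
      },
      "aggs": {
        "per_month": {
          "date_histogram": { "field": "@timestamp", "interval": "month" }
        }
      }
    }
  }
}'

With breadth_first, the sub-aggregations are only computed for the top buckets instead of for every candidate bucket, which can cut memory use significantly on deep or wide aggregation trees.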

Yeah, I am working on improving those queries and finding better alternatives.

My concern is that an OOM still happens despite the breaker limit being set; preventing exactly that is the whole purpose of having the breaker.