Limiting memory usage by the Elasticsearch process

I would like to know what configuration changes are needed to force Elasticsearch to limit its memory usage. My ultimate goal is to prevent the Elasticsearch process on the data nodes from crashing when I run heap-heavy queries/aggregations. For now, I am willing to let such queries wait, but I am not okay with the process crashing and making the node unavailable. Here's what I have already tried:

I followed this article (https://www.elastic.co/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html) and set the following parameters in elasticsearch.yml on each of my data nodes (a sketch of the corresponding yml lines follows the list):

  1. indices.fielddata.cache.size: 75%
  2. indices.breaker.fielddata.limit: 75%
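
For reference, this is roughly what the relevant part of my elasticsearch.yml looks like on each data node (a sketch; everything else is left at the defaults):

```yaml
# elasticsearch.yml (data nodes) -- cap the fielddata cache and its circuit breaker
indices.fielddata.cache.size: 75%
indices.breaker.fielddata.limit: 75%
```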

I believed this would solve the problem, since multiple users run highly intensive aggregations over about 210 GB of data. I assumed that limiting the fielddata cache size would stop the OutOfMemoryError crashes, but it hasn't changed anything: heap usage still climbs to a dangerous 90%. Can anyone please point out what I am doing wrong here?
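
This is roughly how I am watching heap and fielddata usage while the aggregations run (a sketch; host and port are the defaults on each node):

```sh
# per-node heap usage -- this is where I see it sitting around 90%
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max'

# fielddata memory currently held per node / per field
curl -s 'localhost:9200/_cat/fielddata?v'

# circuit breaker limits and trip counts, to confirm the settings took effect
curl -s 'localhost:9200/_nodes/stats/breaker?pretty'
```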

Background: 3.75 TB of data spread across 6 nodes and 85 indices. Frequent, highly intensive aggregation queries are run over 210 to 770 GB of data. We are working on scaling the cluster, and as part of that we are tweaking the configuration to lay the groundwork and continue stress testing it.

Thank you,
Srikanth K S.