Data too large[..]

Hi guys.

Yesterday I ran a rather heavy search query from Kibana and got the error below, just on a different field. I read that I could add this to my config:

    indices.fielddata.cache.size: 5g

I did that and restarted ES, and my query ran with no problems. This morning I was unable to run any searches at all because of the error below. Clearing the fielddata cache fixed it, but that is not a viable long-term solution. I have plenty of RAM on the machine; how should I configure this? Please assist.
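For reference, this is the workaround I used this morning, assuming the standard clear-cache API on localhost:9200:

    # evict fielddata on demand (a workaround, not a fix)
    curl -XPOST 'http://localhost:9200/_cache/clear?fielddata=true'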

[2015-10-14 09:05:17,234][DEBUG][action.search.type       ] [Killer Shrike] [d2-2015.10.14][4], node[m6jOVbkdR6WCtVDuQTUxkw], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@7c96b265]
org.elasticsearch.ElasticsearchException: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [622775500/593.9mb]
	at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(AbstractIndexFieldData.java:80)
	at org.elasticsearch.search.aggregations.support.ValuesSource$MetaData.load(ValuesSource.java:88)
	at org.elasticsearch.search.aggregations.support.AggregationContext.numericField(AggregationContext.java:159)
	at org.elasticsearch.search.aggregations.support.AggregationContext.valuesSource(AggregationContext.java:137)
	at org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory.create(ValuesSourceAggregatorFactory.java:53)
	at org.elasticsearch.search.aggregations.AggregatorFactories.createAndRegisterContextAware(AggregatorFactories.java:53)
	at org.elasticsearch.search.aggregations.AggregatorFactories.createTopLevelAggregators(AggregatorFactories.java:157)
	at org.elasticsearch.search.aggregations.AggregationPhase.preProcess(AggregationPhase.java:79)
	at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:100)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.common.util.concurrent.UncheckedExecutionException: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [622775500/593.9mb]
	at org.elasticsearch.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
	at org.elasticsearch.common.cache.LocalCache.get(LocalCache.java:3937)
	at org.elasticsearch.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
	at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:167)
	at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(AbstractIndexFieldData.java:74)
	... 16 more
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [622775500/593.9mb]
	at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.circuitBreak(ChildMemoryCircuitBreaker.java:97)
	at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:148)
	at org.elasticsearch.index.fielddata.RamAccountingTermsEnum.flush(RamAccountingTermsEnum.java:71)
	at org.elasticsearch.index.fielddata.RamAccountingTermsEnum.next(RamAccountingTermsEnum.java:85)
	at org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder$3.next(OrdinalsBuilder.java:472)
	at org.elasticsearch.index.fielddata.plain.PackedArrayIndexFieldData.loadDirect(PackedArrayIndexFieldData.java:109)
	at org.elasticsearch.index.fielddata.plain.PackedArrayIndexFieldData.loadDirect(PackedArrayIndexFieldData.java:49)
	at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$1.call(IndicesFieldDataCache.java:180)
	at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$1.call(IndicesFieldDataCache.java:167)
	at org.elasticsearch.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
	at org.elasticsearch.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
	at org.elasticsearch.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
	at org.elasticsearch.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
	at org.elasticsearch.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
	... 20 more
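For what it's worth, the limit in the message (593.9mb) looks like the [FIELDDATA] circuit breaker at its 1.x default of 60% of the JVM heap (indices.breaker.fielddata.limit), which is consistent with a node still running the default 1g heap. Per-node breaker usage can be inspected with:

    curl 'http://localhost:9200/_nodes/stats/breaker?pretty'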

I think it helped to up ES_HEAP_SIZE.

Yep, more heap will solve this.
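For example, on a 1.x package install the setting lives in /etc/default/elasticsearch (Debian/Ubuntu) or /etc/sysconfig/elasticsearch (RHEL/CentOS); 4g here is just an illustration, keep it at or below half the machine's RAM:

    ES_HEAP_SIZE=4g

or, when launching from a shell:

    export ES_HEAP_SIZE=4g
    bin/elasticsearch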
Also, use doc values where you can; they keep fielddata on disk instead of the JVM heap, so it stays out of the breaker's accounting.
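Doc values have to be enabled in the mapping before documents are indexed, so they only apply to new indices. A minimal sketch for the @timestamp field, assuming a 1.x cluster, a hypothetical next daily index, and a type named logs:

    curl -XPUT 'http://localhost:9200/d2-2015.10.15' -d '{
      "mappings": {
        "logs": {
          "properties": {
            "@timestamp": { "type": "date", "doc_values": true }
          }
        }
      }
    }'

From 2.0 onwards, doc values are the default for not_analyzed fields, so this only needs to be done explicitly on 1.x.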