I was able to capture the query that Kibana was sending to the ES load balancer:
'{"size":500,"sort":[{"@timestamp":{"order":"desc","unmapped_type":"boolean"}}],"query":{"filtered":{"query":{"query_string":{"analyze_wildcard":true,"query":""}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"gte":1443913200000,"lte":1444517999999}}}],"must_not":[]}}}},"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"":{}},"fragment_size":2147483647},"aggs":{"2":{"date_histogram":{"field":"@timestamp","interval":"3h","pre_zone":"+01:00","pre_zone_adjust_large_interval":true,"min_doc_count":0,"extended_bounds":{"min":1443913200000,"max":1444517999999}}}},"fields":["*","_source"],"script_fields":{},"fielddata_fields":["@timestamp"]}
And this was still producing the following in the ES data node logs:
[2015-10-07 14:04:31,831][WARN ][indices.breaker ] [Crimson Craig] [FIELDDATA] New used memory 639150905 [609.5mb] from field [@timestamp] would be larger than configured breaker: 639015321 [609.4mb], breaking
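(In case it is useful, I believe the per-node fielddata usage for each field, including @timestamp, can be checked with the cat fielddata API, something like:

curl '127.0.0.1:9200/_cat/fielddata?v'
)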
Even if I reduce the query to a single index, like this:
curl -X GET -d '{ "size": 5, "sort": [ { "@timestamp": { "order": "desc" } } ] }' '127.0.0.1:9200/logstash-2015.10.07/_search?pretty'
it still returns the same error/warning. Only after I rebooted the data node (and heap usage went back down) did the above query start returning successfully.
Is there any way I can "recycle" what is in the heap so that I don't get this error again?
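From what I have read, the fielddata cache can be dropped without a full restart using the clear cache API, and capped with the indices.fielddata.cache.size setting, so I was thinking of trying something along these lines (not sure if this is the right approach):

curl -X POST '127.0.0.1:9200/_cache/clear?fielddata=true'

and in elasticsearch.yml on the data nodes:

indices.fielddata.cache.size: 40%   # example value, not tested

Would that be enough to stop the breaker from tripping, or is there a better way?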