Kibana not showing latest logstash logs

As of 00h00 today, my Kibana installation (4.1.2) is not showing today's logstash logs, despite showing all logs from previous days. I have forced a new index for today's logstash but it's still not working.
From what I have been able to troubleshoot:
1º logstash is sending logs to the Elasticsearch cluster (1.6.x)
2º the index for today's logstash has green status
3º documents in today's logstash index have the same structure as previous ones

Any suggestions regarding what could be causing this?

Are there any errors in the Elasticsearch logs?

In the logs of the Elasticsearch load balancer I only found this:

[2015-10-06 13:30:06,719][DEBUG][action.search.type ] [Apache Kid] [logstash-2015.10.06][2], node[2-xd6vilTjCihcIOzX1I6Q], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@a86efa] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [Crimson Craig][inet[/172.17.8.125:9302]][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.ElasticsearchException: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]

But I suppose this error is behind the Kibana notice "Courier Fetch: 5 of 250 shards failed.", because some of the Karaf logs are too big (regarding the limit on that field, but I don't have anything as big as 609.4MB...).

It seems like your query has triggered the circuit breaker, as it would have resulted in too much data being loaded into memory, thereby terminating the query. You may want to increase the available heap space or look at reducing the amount of field data being used, e.g. through the use of doc_values for not_analyzed fields.
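
As a rough sketch of the doc_values route (assuming a node on localhost:9200, the default logstash-* index naming, and a hypothetical template name; it only takes effect on indices created after the template exists):

curl -XPUT 'http://localhost:9200/_template/logstash_doc_values' -d '
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "@timestamp": { "type": "date", "doc_values": true }
      }
    }
  }
}'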

What is the specification of your cluster? How much data and shards do you have in it?
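
If you don't have that at hand, the _cat APIs will report it; a sketch, assuming a node reachable on localhost:9200:

curl 'http://localhost:9200/_cat/indices?v'   # docs and size per index
curl 'http://localhost:9200/_cat/shards?v'    # shard count and placement
curl 'http://localhost:9200/_cat/nodes?v'     # nodes and heap usage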

Data in the cluster:

https://paste.fedoraproject.org/275324/44413983/

Specification of the cluster:

https://paste.fedoraproject.org/275325/39986144/

I would think that if I do a Kibana search over the last 15 minutes it would only search today's logstash index, right?

That depends on how you have defined your index pattern in Kibana. If you have set it up to be date based, e.g. [logstash-]YYYY.MM.DD, Kibana will be able to search only the relevant indices. If you have instead specified a pattern with a wildcard, e.g. logstash*, all matching indices will be queried.

I have changed Kibana to be date based (I had logstash*) but the error persists even if I only search the last 15 minutes.

I was able to capture the query that Kibana was sending to the ES load balancer:

'{"size":500,"sort":[{"@timestamp":{"order":"desc","unmapped_type":"boolean"}}],"query":{"filtered":{"query":{"query_string":{"analyze_wildcard":true,"query":""}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"gte":1443913200000,"lte":1444517999999}}}],"must_not":[]}}}},"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"":{}},"fragment_size":2147483647},"aggs":{"2":{"date_histogram":{"field":"@timestamp","interval":"3h","pre_zone":"+01:00","pre_zone_adjust_large_interval":true,"min_doc_count":0,"extended_bounds":{"min":1443913200000,"max":1444517999999}}}},"fields":["*","_source"],"script_fields":{},"fielddata_fields":["@timestamp"]}

And this was still producing the following in the ES data node logs:

[2015-10-07 14:04:31,831][WARN ][indices.breaker ] [Crimson Craig] [FIELDDATA] New used memory 639150905 [609.5mb] from field [@timestamp] would be larger than configured breaker: 639015321 [609.4mb], breaking

Even if I reduce this to one index, like this:

curl -X GET -d '{ "size": 5, "sort": [ { "@timestamp": { "order": "desc" } } ] }' '127.0.0.1:9200/logstash-2015.10.07/_search?pretty'

it still returns the same error/warning. Only after I rebooted the data node (and reduced the heap usage) did the above query start returning successfully.
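
For anyone checking the same thing: how much fielddata each node is holding for @timestamp can be inspected with something like this (assuming a node on localhost:9200):

curl 'http://localhost:9200/_nodes/stats/indices/fielddata?fields=@timestamp&pretty'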

Is there any way that I can "recycle" what is in the heap so that I don't get this error again?

You might find that because of previous queries your cache is still full... too full for your new query, even when reduced to one index.

Try clearing the cache with "curl -XPOST http://[elasticsearch ip]:9200/_cache/clear", or simply restart Elasticsearch (if you're in a dev environment).

I've also had the problem that if you leave an auto-refreshing dashboard open, the cache eventually fills up as it queries across new daily indices. So I have a script that clears the cache each morning; something along the lines of the sketch below.
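
A sketch, not the exact script; it assumes Elasticsearch answers on localhost:9200 and clears only the fielddata cache:

#!/bin/sh
# Run from cron each morning: clear the fielddata cache so that old
# daily indices stop pinning @timestamp fielddata in the heap.
curl -s -XPOST 'http://localhost:9200/_cache/clear?fielddata=true'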