Kibana not showing latest logstash logs

As of 00h00m today, my Kibana installation (4.1.2) is not showing today's logstash logs, despite showing all logs from previous days. I have forced a new index for today's logstash data, but it is still not working.
From what I have been able to troubleshoot:
1. logstash is sending logs to the Elasticsearch cluster (1.6.x)
2. today's logstash index has green status
3. documents in today's logstash index have the same structure as previous ones

Any suggestions regarding what could be causing this?

Are there any errors in the Elasticsearch logs?

In the logs of the Elasticsearch load balancer I only found this:

[2015-10-06 13:30:06,719][DEBUG][ ] [Apache Kid] [logstash-2015.10.06][2], node[2-xd6vilTjCihcIOzX1I6Q], [P], s[STARTED]: Failed to execute [] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [Crimson Craig][inet[/]][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.ElasticsearchException: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]

But I suppose this error corresponds to the Kibana notice "Courier Fetch: 5 of 250 shards failed.", because some of the Karaf logs are too big (relative to the limit on that field, but I don't have anything as big as 609.4MB...)

It seems like your query has triggered the fielddata circuit breaker: it would have resulted in too much field data being loaded into heap memory, so Elasticsearch terminated the query. Note that the 609.4mb in the error is the breaker limit (by default 60% of the heap), not the size of any single document. You may want to increase the available heap space, or reduce the amount of field data being used, e.g. by enabling doc_values for not_analyzed fields.
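As a sketch of the doc_values suggestion above: in Elasticsearch 1.x you can put an index template so that future daily logstash indices store a field on disk as doc_values instead of loading it into fielddata on the heap. The template name, host, and field list below are assumptions, not something from this thread; adjust them to your own mappings.

```shell
# Sketch only: enable doc_values for @timestamp on future logstash-*
# indices via an index template (ES 1.x syntax). Host/port and the
# template name "logstash_doc_values" are illustrative assumptions.
curl -XPUT 'http://localhost:9200/_template/logstash_doc_values' -d '{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "doc_values": true
        }
      }
    }
  }
}'
```

Note that a template only affects indices created after it is put, so the change takes effect with the next day's index; existing indices keep using heap-based fielddata.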

What is the specification of your cluster? How much data and shards do you have in it?

data in cluster

specification of cluster:

I would think that if I do a Kibana search over the last 15 minutes, it would only search today's logstash index, right?

That depends on how you have defined your index pattern in Kibana. If you have set it up to be date based, e.g. [logstash-]YYYY.MM.DD, Kibana will be able to search only the relevant indices. If, however, you have specified a pattern with a wildcard, e.g. logstash-*, all matching indices will be queried.

I have changed Kibana to be date based (I had logstash-*), but the error persists even if I only search the last 15 minutes.

I was able to capture the query that Kibana was sending to the ES load balancer:


And this was still producing the following in the ES data node logs:

[2015-10-07 14:04:31,831][WARN ][indices.breaker ] [Crimson Craig] [FIELDDATA] New used memory 639150905 [609.5mb] from field [@timestamp] would be larger than configured breaker: 639015321 [609.4mb], breaking

Even if I reduce this to a single index, like this:

curl -X GET -d '{ "size": 5, "sort": [ { "@timestamp": { "order": "desc" } } ] }' ''

it still returns the same error/warning. Only after I rebooted the data node (and reduced the heap size usage) did the above query start returning successfully.
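For anyone in a similar situation, the node stats API can show how much heap the fielddata for a given field is actually consuming on each node, which helps confirm whether @timestamp fielddata is what is filling the heap. The host and port below are assumptions; point the request at any node in the cluster.

```shell
# Sketch only: per-node fielddata memory usage for the @timestamp field.
# "human" renders sizes like "609.4mb" instead of raw byte counts.
curl -XGET 'http://localhost:9200/_nodes/stats/indices/fielddata?fields=@timestamp&human&pretty'
```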

Is there any way that I can "recycle" what is in the heap so that I don't get this error again?

You might find that, because of previous queries, your cache is still full: too full for your new query, even when reduced to the one index.

Try clearing the cache with "curl -XPOST http://[elasticsearch ip]:9200/_cache/clear", or simply restart Elasticsearch (if you're in a dev environment).

I've also had the problem that if you leave an auto-refreshing dashboard open eventually the cache will fill up as it queries across new daily indexes. So I have a script that clears the cache each morning.