Courier fetch: 7 of 80 shards failed

I started getting these errors in Kibana.

When I go to Discover, I see that the last event that made it into ES was at 2016-09-06 11:59:59, so perhaps something went wrong when the new index was created? We create a new index every day (the fluentd plugin actually handles that).

Index: fluentd-2016.09.07 Shard: 0 Reason: RemoteTransportException[[Foolkiller][inet[/10.212.32.99:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]];
Index: fluentd-2016.09.07 Shard: 1 Reason: RemoteTransportException[[Foolkiller][inet[/10.212.32.99:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]];
Index: fluentd-2016.09.07 Shard: 2 Reason: RemoteTransportException[[Lord Pumpkin][inet[/10.212.32.96:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]];
Index: fluentd-2016.09.07 Shard: 3 Reason: RemoteTransportException[[Ms. MODOK][inet[/10.212.32.97:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]];
Index: fluentd-2016.09.05 Shard: 4 Reason: RemoteTransportException[[Crimson Dynamo][inet[/10.212.32.98:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]];
Index: fluentd-2016.09.06 Shard: 4 Reason: RemoteTransportException[[Crimson Dynamo][inet[/10.212.32.98:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]];
Index: fluentd-2016.09.07 Shard: 4 Reason: RemoteTransportException[[Ms. MODOK][inet[/10.212.32.97:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [639015321/609.4mb]];

That CircuitBreakingException is the primary issue: Elasticsearch's fielddata circuit breaker is rejecting the query on each shard because loading fielddata for @timestamp (needed to sort by time) would push fielddata memory past the configured limit, here 639015321 bytes / 609.4mb, which is 60% of the heap by default. Rather than risk an OutOfMemoryError, the node refuses to process the request.
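Assuming a node reachable at localhost:9200 (adjust the host to your cluster), a sketch of both the short-term mitigation and the longer-term fix on this ES 1.x setup: clear the fielddata cache to free heap now, optionally raise the breaker limit for headroom, and add an index template so each new daily fluentd index maps @timestamp with doc_values, which keeps the sort data on disk instead of on heap:

```shell
# Short-term: evict fielddata from the heap (ES 1.x cache clear API).
curl -XPOST 'http://localhost:9200/_cache/clear?fielddata=true'

# Inspect how much fielddata each node holds for @timestamp.
curl 'http://localhost:9200/_nodes/stats/indices/fielddata?fields=@timestamp&pretty'

# Optionally raise the fielddata breaker (default is 60% of heap).
# This only buys headroom; setting it too high risks OutOfMemoryError.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": { "indices.breaker.fielddata.limit": "75%" }
}'

# Longer-term: an index template (name "fluentd_timestamp" is arbitrary)
# so every new fluentd-* daily index stores @timestamp with doc_values.
curl -XPUT 'http://localhost:9200/_template/fluentd_timestamp' -d '{
  "template": "fluentd-*",
  "mappings": {
    "_default_": {
      "properties": {
        "@timestamp": { "type": "date", "doc_values": true }
      }
    }
  }
}'
```

Note that doc_values only take effect on indices created after the template is in place; the existing daily indices would need to be reindexed (or simply aged out) before the heap pressure fully goes away.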