Courier Fetch: 3 out of 33 shards failed

Hello Everyone!

When trying to check a specific index via Kibana, I get the following error:

[screenshot of the "Courier Fetch: 3 of 33 shards failed" error shown in Kibana]

The related logs that I see in Elasticsearch for this "Courier Fetch: 3 shards failed" error are as follows:

[2018-03-01T12:54:02,763][DEBUG][o.e.a.s.TransportSearchAction] [ip-10-50-30-150] [2120328] Failed to execute fetch phase
org.elasticsearch.transport.RemoteTransportException: [ip-10-50-45-225][10.50.45.225:9300][indices:data/read/search[phase/fetch/id]]
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [request_departure_date] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.
	at org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType.fielddataBuilder(TextFieldMapper.java:301) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:115) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.index.query.QueryShardContext.getForField(QueryShardContext.java:165) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.search.DefaultSearchContext.getForField(DefaultSearchContext.java:501) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.search.fetch.subphase.DocValueFieldsFetchSubPhase.hitsExecute(DocValueFieldsFetchSubPhase.java:75) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:170) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:493) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.action.search.SearchTransportService$11.messageReceived(SearchTransportService.java:444) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.action.search.SearchTransportService$11.messageReceived(SearchTransportService.java:441) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1554) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.2.jar:6.1.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) ~[?:?]
	at java.lang.Thread.run(Thread.java:844) [?:?]
[2018-03-01T12:54:02,782][DEBUG][o.e.a.s.TransportSearchAction] [ip-10-50-30-150] [2120322] Failed to execute fetch phase
org.elasticsearch.transport.RemoteTransportException: [ip-10-50-45-225][10.50.45.225:9300][indices:data/read/search[phase/fetch/id]]
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [request_departure_date] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.

[2018-03-01T12:54:02,791][DEBUG][o.e.a.s.TransportSearchAction] [ip-10-50-30-150] [1909731] Failed to execute fetch phase
org.elasticsearch.transport.RemoteTransportException: [ip-10-50-30-72][10.50.30.72:9300][indices:data/read/search[phase/fetch/id]]
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [request_departure_date] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.

And here is the part of my Elasticsearch config file that I think is related to this issue:

http.enabled: true
thread_pool:
  index:
    queue_size: 10000

Please let me know how I can get rid of this error.

What's the query you're trying?

@jaisharma I'm not using any query; I'm just selecting a specific index and it returns the following.

[screenshot of the query in Kibana]

Can you post the output of your GET _mapping call?

Are you on the Discover page, or on a visualisation or a dashboard?
Did you configure your Kibana index pattern to use the request_departure_date field?

GET /logstash-demo-all-*/_mapping

          "request_date": {
            "type": "date"
          },
          "request_departure_date": {
            "type": "date"
          },
          "request_departure_station_code": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },

I am on the Discover page.

And yes, request_departure_date is defined in the index patterns.

Apparently, one of the shards "thinks" that request_departure_date is a text field.
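
One way to check that (just a sketch, reusing the logstash-demo-all-* pattern from your GET _mapping call above) is to ask for the mapping of this single field across every index the pattern matches:

GET /logstash-demo-all-*/_mapping/field/request_departure_date

If any matched index reports "type": "text" for request_departure_date there, that index is most likely the one the fetch phase is failing on.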

I can't tell what happened, but I think the most likely workaround would be to reindex your data into another index whose mapping is correct.
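
Roughly something like this (only a sketch; the index names are made up and "doc" is just the usual Logstash 6.x mapping type, so adjust both to your setup): create a new index with an explicit date mapping, then copy the data over with the reindex API:

PUT /logstash-demo-all-fixed
{
  "mappings": {
    "doc": {
      "properties": {
        "request_departure_date": { "type": "date" }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "logstash-demo-all-broken" },
  "dest": { "index": "logstash-demo-all-fixed" }
}

Once the reindex finishes, point your Kibana index pattern (or an alias) at the new index and delete the broken one.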

How can that happen? I've never heard of mappings differing across shards. Is it because of replication?

It happened in very old versions, but AFAIK it should not happen anymore in the 6.x (maybe even 5.x) series. I saw you are using 6.1, so that should not be the case.

Did you have any outage, or anything strange in the logs, when the index was created?

I'm running 2.4.0, and your statement about shards having different copies of the mapping scares me.
How do I make sure I don't fall into this trap? Can I do something like querying with preference=primary/replica and checking whether I get consistent mappings? Shouldn't ES raise an alarm if mappings are inconsistent across shards?

@jaisharma In the past (I can't remember exactly when off the top of my head), primary shards could receive different kinds of documents in parallel at exactly the same time. And if you are using only dynamic mapping, and worse, no index template, then you can end up with the following.

For example, let's say that primary shard 1 gets a first doc like:

{
 "foo": "bar"
}

Primary shard 2 gets, at exactly the same time, something like:

{
 "foo": "2017-12-23"
}

Primary one thinks it's a String.
Primary two thinks it's a Date.

And you end up with an inconsistent mapping.

That happened in the past. It is no longer true with recent versions.
And it cannot happen with index templates and explicit mappings, even in old versions.
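
As an illustration (a sketch only; the template name is made up and "doc" is again just the common Logstash 6.x mapping type), an index template like this pins request_departure_date to date for every new logstash-demo-all-* index, so a stray string value in the first document can no longer flip the mapping:

PUT /_template/logstash-demo-dates
{
  "index_patterns": ["logstash-demo-all-*"],
  "mappings": {
    "doc": {
      "properties": {
        "request_departure_date": { "type": "date" }
      }
    }
  }
}

On older versions (like the 2.4 mentioned above) the key is "template" instead of "index_patterns", if I remember correctly.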

Thank you @dadoonet.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.