Kibana Dashboard Error – Set fielddata=true on fieldname

I am collecting performance data using Metricbeat and I am trying to build dashboards in Kibana.

I'm using ELK stack version 5.3, and since I imported the Metricbeat dashboards I have been getting the error below.

Courier Fetch: 10 of 15 shards failed.

Please help.

Hi there,

Wow, this is odd. Can you help me get more information on this problem?

  1. What happens if you click on a visualization? Does it load? If it gives an error, what is the error?
  2. Could you inspect your network response and let me know what it contains? The courier error in your screenshot originates in Elasticsearch, so maybe we can get more information by checking out the raw network response.

Thanks,
CJ

Hello,

Yes, this error is new to me; I'm facing it for the first time, since installing ELK stack version 5.3.

And to your questions:

  1. Yes, when I click on any visualization it loads, but then I get the error message "Courier Fetch: 10 of 20 shards failed." at the top of the header in Kibana, and only when I open the Metricbeat dashboard.

  2. Please find the Elasticsearch logs from the time of the error below.

     [2017-04-20T12:20:06,991][DEBUG][o.e.a.s.TransportSearchAction] [bngwidap108.aonnet.aon.net] [metricbeat-2017.04.17][4], node[TbwfU-3ZTaK9ygrYgC4vDg], [P], s[STARTED], a[id=v3E9F0SnTRWXiAPJi9G2BA]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[metricbeat-2017.04.17, metricbeat-2017.04.19, metricbeat-2017.04.18, metricbeat-2017.04.20], indicesOptions=IndicesOptions[id=39, ignore_unavailable=true, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[], routing='null', preference='1492669522819', requestCache=null, scroll=null, source={
       "size" : 0,
       "query" : {
         "bool" : {
           "must" : [
             {
               "query_string" : {
                 "query" : "metricset.module: system AND metricset.name: memory",
                 "fields" : [ ],
                 "use_dis_max" : true,
                 "tie_breaker" : 0.0,
                 "default_operator" : "or",
                 "auto_generate_phrase_queries" : false,
                 "max_determinized_states" : 10000,
                 "enable_position_increments" : true,
                 "fuzziness" : "AUTO",
                 "fuzzy_prefix_length" : 0,
                 "fuzzy_max_expansions" : 50,
                 "phrase_slop" : 0,
                 "analyze_wildcard" : true,
                 "escape" : false,
                 "split_on_whitespace" : true,
                 "boost" : 1.0
               }
             },
             {
               "query_string" : {
                 "query" : "*",
                 "fields" : [ ],
                 "use_dis_max" : true,
                 "tie_breaker" : 0.0,
                 "default_operator" : "or",
                 "auto_generate_phrase_queries" : false,
                 "max_determinized_states" : 10000,
                 "enable_position_increments" : true,
                 "fuzziness" : "AUTO",
                 "fuzzy_prefix_length" : 0,
                 "fuzzy_max_expansions" : 50,
                 "phrase_slop" : 0,
                 "analyze_wildcard" : true,
                 "escape" : false,
                 "split_on_whitespace" : true,
                 "boost" : 1.0
               }
             },
             {
               "range" : {
                 "@timestamp" : {
                   "from" : 1490079006841,
                   "to" : 1492671006841,
                   "include_lower" : true,
                   "include_upper" : true,
                   "format" : "epoch_millis",
                   "boost" : 1.0
                 }
               }
             }
           ],
           "disable_coord" : false,
           "adjust_pure_negative" : true,
           "boost" : 1.0
         }
       },
       "_source" : {
         "includes" : [ ],
         "excludes" : [ ]
       },
       "aggregations" : {
         "5" : {
           "terms" : {
             "field" : "beat.name",
             "size" : 50,
             "min_doc_count" : 1,
             "shard_min_doc_count" : 0,
             "show_term_doc_count_error" : false,
             "order" : [
               {
                 "1" : "desc"
               },
               {
                 "_term" : "asc"
               }
             ]
           },
           "aggregations" : {
             "1" : {
               "avg" : {
                 "field" : "system.memory.total"
               }
             },
             "2" : {
               "avg" : {
                 "field" : "system.memory.actual.used.bytes"
               }
             },
             "3" : {
               "avg" : {
                 "field" : "system.memory.swap.used.pct"
               }
             },
             "4" : {
               "avg" : {
                 "field" : "system.memory.actual.free"
               }
             },
             "6" : {
               "avg" : {
                 "field" : "system.memory.swap.used.bytes"
               }
             },
             "7" : {
               "avg" : {
                 "field" : "system.memory.actual.used.pct"
               }
             }
           }
         }
       },
       "highlight" : {
         "pre_tags" : [
           "@kibana-highlighted-field@"
         ],
         "post_tags" : [
           "@/kibana-highlighted-field@"
         ],
         "fragment_size" : 2147483647,
         "require_field_match" : false,
         "fields" : {
           "*" : { }
         }
       }
     }}] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [bngwidap108.aonnet.aon.net][10.209.68.107:9300][indices:data/read/search[phase/query]]
    Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [beat.name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory.
    	at org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType.fielddataBuilder(TextFieldMapper.java:336) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:111) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.index.query.QueryShardContext.getForField(QueryShardContext.java:166) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.aggregations.support.ValuesSourceConfig.resolve(ValuesSourceConfig.java:97) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder.resolveConfig(ValuesSourceAggregationBuilder.java:297) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder.doBuild(ValuesSourceAggregationBuilder.java:290) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder.doBuild(ValuesSourceAggregationBuilder.java:39) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.aggregations.AbstractAggregationBuilder.build(AbstractAggregationBuilder.java:126) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.aggregations.AggregatorFactories$Builder.build(AggregatorFactories.java:333) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.SearchService.parseSource(SearchService.java:637) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.SearchService.createContext(SearchService.java:468) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:444) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:331) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:328) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:618) [elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:613) [elasticsearch-5.3.0.jar:5.3.0]
    	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.3.0.jar:5.3.0]
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_101]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_101]
    	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]

Also, please find the network responses from the server below.

h:\>ping 10.209.68.107 -n 5

Pinging 10.209.68.107 with 32 bytes of data:
Reply from 10.209.68.107: bytes=32 time<1ms TTL=128
Reply from 10.209.68.107: bytes=32 time<1ms TTL=128
Reply from 10.209.68.107: bytes=32 time<1ms TTL=128
Reply from 10.209.68.107: bytes=32 time<1ms TTL=128
Reply from 10.209.68.107: bytes=32 time<1ms TTL=128

Ping statistics for 10.209.68.107:
    Packets: Sent = 20, Received = 20, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

h:\>tracert 10.209.68.107

Tracing route to BNGWIDAP108.aonnet.aon.net [10.209.68.107]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  BNGWIDAP108.aonnet.aon.net [10.209.68.107]

Trace complete.

Interesting. I've seen this error before with beats and dashboard imports, though usually you see a red error showing that message, not a yellow error bar with the courier fetch failure.

Anyhow, beat.name should be a keyword but it looks like it's not.

Can you go into management and show what it says for beat.name?

I'm not sure how the field was set incorrectly. Are you using the same version of metricbeat as kibana and elasticsearch? Did you have metricbeat running, or dashboards imported, prior to upgrading to 5.3?

If you drill in to a time range that is more recent, do you still get the error? Sometimes you'll have data in old indexes with an invalid mapping, but data in more recent indexes (e.g. after you upgraded) in a newer format.

You should be able to set fielddata=true on that field to get it working by following the "fielddata mapping parameter" page in the Elasticsearch Guide.

Though if there are multiple fields that are set wrong, you may want to wipe your metricbeat index and start over to see if that helps (but only if there isn't data in it you want to preserve).
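
If you do go the wipe-and-start-over route, the simplest sketch (assuming there is nothing in those daily metricbeat-* indices that you need to keep; deleting is irreversible) is a wildcard delete, after which Metricbeat will re-create the daily indices from its template:

DELETE metricbeat-*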

Hi Stacey,

Sorry for the late reply.

Yes, it's strange that fielddata is disabled for me; while I was using ELK stack 5.2 I didn't face any issue, and now after moving to ELK 5.3 I can see fielddata is disabled.

This is also the first time I'm facing the courier fetch failure error.

Please help me understand how to make beat.name a keyword.
Please find below the image from Management showing beat.name.

Yes, I'm using the same version of Metricbeat as Kibana and Elasticsearch (ELK 5.3, including Beats), and I also had Metricbeat running with dashboards imported prior to upgrading to 5.3.

This issue seems to be intermittent; some days it didn't appear at all, but in recent days I can see the error appearing continuously.

Please help me set fielddata=true, as I wasn't able to understand it from the link below.
https://www.elastic.co/guide/en/elasticsearch/reference/current/fielddata.html

I can also see in my metricbeat.template.json file that beat.name is mapped as type keyword:

"beat": {
"properties": {
"hostname": {
"ignore_above": 1024,
"type": "keyword"
},
"name": {
"ignore_above": 1024,
"type": "keyword"
},
"version": {
"ignore_above": 1024,
"type": "keyword"
}
}

Correct me if I'm wrong.

Hi Team,

Can anyone let us know how to resolve this issue, please?

Hi Team,

Could someone please reply? We are stuck in the middle of our ELK environment setup and are awaiting your response. Please give us a solution.

Thanks

Can you post the output of

GET metricbeat*/_mapping/metricsets/field/beat.name

The screenshot you shared shows the right data, so I'm not sure why beat.name is throwing that error.

If you go back to the dashboard and shorten the time range, so it's something like Last 15 minutes, do you continue to see the error?

Hi Stacey,

Thanks for getting back to us.

Yes, for the input
GET metricbeat*/_mapping/metricsets/field/beat.name

please find the output below:

{
  "metricbeat-2017.04.24": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.23": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.26": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.05.02": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.25": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.28": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.17": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.27": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.19": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.18": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.29": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.20": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.30": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.05.01": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.22": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  },
  "metricbeat-2017.04.21": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "keyword",
              "ignore_above": 1024
            }
          }
        }
      }
    }
  }
}

And also, if I go back to the dashboard and shorten the time range to the last 15 minutes, strangely no error is displayed now.

So the issue is that some of your indexes have the wrong mapping for beat.name, specifically metricbeat-2017.04.17 and metricbeat-2017.04.18. You can see they look like this:

  "metricbeat-2017.04.17": {
    "mappings": {
      "metricsets": {
        "beat.name": {
          "full_name": "beat.name",
          "mapping": {
            "name": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        }
      }
    }
  }

You can try updating the mapping for these indexes specifically by running this command:

PUT metricbeat-2017.04.18/_mapping/metricsets
{
  "properties": {
    "beat.name": { 
      "type":     "text",
      "fielddata": true
    }
  }
}

and the same for the rest of your older indexes (metricbeat-2017.04.17, etc.), as shown below.
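
For example, for metricbeat-2017.04.17 (the other index in your mapping output that still shows beat.name as text), that would just be the same body with the other index name:

PUT metricbeat-2017.04.17/_mapping/metricsets
{
  "properties": {
    "beat.name": {
      "type": "text",
      "fielddata": true
    }
  }
}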

Or, depending on how far back you need that data to go, you can just keep the time range short enough that it doesn't pick up the indexes prior to 2017.04.19.

Thank you so much Stacey,

The fielddata issue seems to be resolved now, but we still get an error about shards, with the message below, when I load the Metricbeat dashboard.

Courier fetch: 10 of 85 shards failed.

Please let us know what could be causing this.

That's really weird @Sujith. Are there any error messages in the console or the terminal? Does it happen for every dashboard, or just that one? What does Monitoring look like? Maybe there is a legitimate issue with some of your shards. Do you see it when you open up a visualization in Visualize, or navigate to Discover, or just on the dashboard?
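
If Monitoring isn't set up, the output of the standard cluster health API would also show whether some of your shards are in trouble:

GET _cluster/health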

Yeah, that is strange behavior I'm seeing on version 5.3. The error message only shows up in the Kibana console. It happens only on the Metricbeat dashboard, and not daily; the error only comes up sometimes. When I open any visualization or navigate to Discover, the error won't turn up. Monitoring does look like there might be an issue with the shards: I can see as many unassigned shards as active shards.

{
  "cluster_name" : "AON_ECM_ELK",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 615,
  "active_shards" : 615,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 615,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}

Maybe you can help with getting the unassigned shards assigned; I hope that might resolve the issue.

Hoping for a solution from your side.

Hi Team,

Could someone please reply? We are stuck in the middle of our ELK environment setup and are awaiting your response. Please give us a solution.

Many Thanks

Hi Team,

Any update please? :disappointed:

Sorry for the delay. I suggest asking about the shard failures in the Elasticsearch channel. The issue may not be specifically with Kibana, but rather an issue with some of your shards.
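
That said, the cluster health you posted shows a single node with 615 active shards and 615 unassigned ones, which usually just means the replica copies have nowhere to be allocated on a one-node cluster. If that turns out to be the cause, one option (only a sketch, and assuming you are comfortable running without replicas on that single node) is to drop the replica count to 0, e.g. for the metricbeat indices:

PUT metricbeat-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}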

One last thing to try, because I just ran into this error myself, and it was due to a faulty painless scripted field.

Do you have any painless scripted fields? If so, perhaps try removing them?

Though in my situation, the error was 1 of 2 shards failed, and no data was retrieved at all on the Discover page.

Sorry for the late reply, Stacey; I was on vacation.

Yes, I checked from my end and we can see there are no painless scripted fields used.

Or maybe it is a different issue.