Shard failure during Kibana visualisation and 'Discover'

Hi,

I'm upgrading an Elasticsearch cluster from 5.x to 7.0. I have set up a 7.0.0 cluster, reindexed the data from the 5.x cluster, and then exported all the saved objects from the old cluster to the new one.
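For context, the reindex from the old cluster was done with reindex-from-remote, roughly along these lines (the host and index names below are placeholders for my real ones, and the new cluster has the old host listed in reindex.remote.whitelist):

# Sketch of the reindex-from-remote call, per index
curl -X POST "http://new-cluster:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": { "host": "http://old-5x-cluster:9200" },
    "index": "my_index"
  },
  "dest": { "index": "my_index" }
}'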

After this, I'm getting an 'x of y shards failed' error when trying to view some visualisations or search on some indices.

There's nothing about this failure in the Elasticsearch logs, so how should I fix this error?

Thanks for the help!

Can somebody help me with this? There's nothing in the Elasticsearch logs, all shards are assigned, and the cluster status is green. I don't know how to proceed.

Ordinarily our upgrade procedures are designed to take you from the last minor release of a major version to the next major, e.g. 6.7 to 7. I suspect this is a mismatch between the format of the saved 5.x Kibana visualisations and the format required by 7. It's probably best to ask in the Kibana forum.

Thanks for the reply @Mark_Harwood

Yes, I'll ask in the Kibana forum too. But this problem is also happening on the 'Discover' tab, which is what led me to think it might be an Elasticsearch issue as well.

What do you think about the shard failure showing up on the 'Discover' tab?

Is it possible you have mixed Elasticsearch versions in your cluster?
It's hard to say what is going on without seeing details of the failures, I'm afraid.

No, all nodes are 7.0.0, and the cluster status is green.
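I double-checked the versions with the cat nodes API (the address is just where I run curl from):

curl -s "http://localhost:9200/_cat/nodes?v&h=name,version"

Every node reports 7.0.0. And this is the cluster health output: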

{
  "cluster_name" : "elasticsearch_cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 643,
  "active_shards" : 1286,
  "relocating_shards" : 2,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue" : "0s",
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent" : "100.0%",
  "active_shards_percent_as_number" : 100.0
}

And there's no clue about the error in either the Kibana or the Elasticsearch logs.

I'm running both Elasticsearch and Kibana in Docker containers, so I'm checking the Docker logs of those containers. Is there anywhere else I can get info about these failures?

It's possible to change logging levels to see if that helps, but I think my next step would be to simplify to the smallest reproducible example: first take Kibana out of the equation and run the query it issues via curl, then reduce the number of indices you're searching, then the number of query clauses, etc.
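For example, something along these lines (index pattern and address are placeholders). When shards fail, the search response itself carries a "_shards.failures" array with the underlying reason that Kibana hides behind 'x of y shards failed':

# Run a search directly against the same index pattern Kibana uses
curl -s "http://localhost:9200/my-index-*/_search?size=1&pretty"

# Optionally raise the Elasticsearch log level while reproducing
curl -X PUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "logger.org.elasticsearch": "DEBUG"
  }
}'

Remember to set the logger back to null when you're done.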

Thanks Mark, I'll try that.

I hit this when I was using a wildcard in the Kibana index pattern to pick up multiple indices,

e.g.:

index -> alias
data_type1_001 -> MYCO_DATA_type1
data_type1_002 -> MYCO_DATA_type1
data_type2_001 -> MYCO_DATA_type2

For MYCO_DATA_type1, the field definitions in the data_type1_001 and data_type1_002 indices need to be the same.

For MYCO_DATA*, the field definitions in the data_type1_001, data_type1_002, and data_type2_001 indices need to be the same.

Make sure you haven't predefined a field as keyword in one index while, in another index where you haven't defined it, ES has dynamically mapped it as text with a keyword sub-field (see the template sketch after the two mappings below).

Mapping from template:

{
  "data_type1_001" : {
    "aliases" : {
      "MYCO_DATA_type1" : { }
    },
    "mappings" : {
      "properties" : {
        "meta_data_object" : {
          "properties" : {
            "meta_data_object_field_1" : {
              "type" : "keyword"
            },
[...]

Mapping that happens when ES "guesses":

{
  "data_type1_002" : {
    "aliases" : {
      "MYCO_DATA_type1" : { }
    },
    "mappings" : {
      "properties" : {
        "meta_data_object" : {
          "properties" : {
            "meta_data_object_field_1" : {
              "type" : "text",
              "fields" : {
                "keyword" : {
                  "type" : "keyword",
                  "ignore_above" : 256
                }
              }
            },
[...]
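To avoid this, the mapping can come from an explicit index template rather than dynamic mapping. A rough sketch (template name and address are placeholders; the field names are the ones from the mappings above):

# Compare the field types across the indices behind the alias / pattern
curl -s "http://localhost:9200/data_type1_*/_mapping?pretty"

# Define the field explicitly in a template so new indices never fall back to ES "guessing"
curl -X PUT "http://localhost:9200/_template/data_type1" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["data_type1_*"],
  "mappings": {
    "properties": {
      "meta_data_object": {
        "properties": {
          "meta_data_object_field_1": { "type": "keyword" }
        }
      }
    }
  }
}'

Note that a template only applies to indices created after it exists; any existing index that already has the conflicting "guessed" mapping has to be reindexed.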
