DANGLING_INDEX_IMPORTED: How to clean up old deleted indices

Hello everyone,

Right now we have a 7-node cluster running version 6.3.0. One node is used only for Kibana; every other node has the following options in elasticsearch.yml:

node.master: true
node.data: true
node.ingest: true
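
The node that only runs Kibana is not shown above; as far as I understand it is meant to act as a coordinating-only node, which would presumably have the opposite settings (this is an assumption, not copied from that node):

node.master: false
node.data: false
node.ingest: false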

These are my _cluster/settings:

GET _cluster/settings
{
  "persistent": {
    "xpack": {
      "monitoring": {
        "collection": {
          "enabled": "true"
        }
      }
    }
  },
  "transient": {
    "cluster": {
      "routing": {
        "allocation": {
          "disk": {
            "watermark": {
              "low": "92%",
              "flood_stage": "99%",
              "high": "97%"
            }
          }
        }
      },
      "info": {
        "update": {
          "interval": "1m"
        }
      }
    },
    "discovery": {
      "zen": {
        "minimum_master_nodes": "4"
      }
    }
  }
}

For a couple of weeks now I have had a red cluster because a lot of shards do not have a valid shard copy.

GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state
index                                     shard prirep state      unassigned.reason
.monitoring-logstash-6-2019.02.25         0     p      UNASSIGNED DANGLING_INDEX_IMPORTED
.monitoring-logstash-6-2019.02.25         0     r      UNASSIGNED DANGLING_INDEX_IMPORTED
.monitoring-es-6-2019.02.21               0     p      UNASSIGNED DANGLING_INDEX_IMPORTED
.monitoring-es-6-2019.02.21               0     r      UNASSIGNED DANGLING_INDEX_IMPORTED
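
To get more detail on why a single shard stays unassigned, the allocation explain API can be asked about one of the primaries above, for example:

GET _cluster/allocation/explain
{
  "index": ".monitoring-es-6-2019.02.21",
  "shard": 0,
  "primary": true
}

The response states whether a valid copy of the shard data still exists on any node.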

There are many more. All of them were deleted weeks ago by Curator. First I tried to delete these indices with:

DELETE .monitoring-logstash-6-2019.02.25,.monitoring-es-6-2019.02.21

But this seems to be an endless story. Does anyone know how to find the root cause and fix it?

Regards,

Christian

Did you perhaps repurpose the node that is now used for Kibana, i.e. was it master- or data-eligible at some point? If so, you've hit https://github.com/elastic/elasticsearch/issues/27073. If the Kibana node previously held cluster metadata or shard data, that can cause exactly this situation. The solution is to clean the data folder on the coordinating-only node.

In 7.0, we refuse to start a coordinating-only node that has local shard data.
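
One quick way to check how each node currently reports its roles is the nodes cat API; a coordinating-only node shows up with a node.role of "-":

GET _cat/nodes?v&h=name,node.role,master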

Hey Yannick,

I am not sure, but it is possible that there was a misconfiguration some weeks ago. I did have the following folder in our data folder:

nodes/0/indices

I did the following steps:

  1. Shut down Elasticsearch
  2. Delete the whole nodes folder
  3. Start Elasticsearch

The indices subfolder was not created again.
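
For anyone else hitting this, the whole procedure on the coordinating-only node was roughly the following (a sketch; it assumes systemd and the default data path /var/lib/elasticsearch, so adjust it to your own path.data):

sudo systemctl stop elasticsearch
# remove the leftover shard data that a coordinating-only node should not have
sudo rm -rf /var/lib/elasticsearch/nodes
sudo systemctl start elasticsearch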

After deleting the indices behind all 48 unassigned shards, the cluster state went back to green.
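
In case it helps someone else: instead of listing all 48 index names by hand, a wildcard delete should also work (assuming action.destructive_requires_name has not been set to true), for example:

DELETE .monitoring-es-6-2019.02.*,.monitoring-logstash-6-2019.02.*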

Thanks for your help.

Regards

Christian

Great, happy to hear that.
