Can not be imported as a dangling index

I have recently set up Curator to delete old data, but since then I am seeing a large increase in dangling index warnings, and they relate to the .kibana index, which doesn't get touched by Curator.
[2018-11-21T07:45:45,876][WARN ][o.e.g.DanglingIndicesState] [hostname] [[.kibana/C99USia3ThKbLBNRTsRZdQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata

Can anyone shed any light on it?

How many nodes do you have in the cluster? How many of these are master-eligible? What is minimum_master_nodes set to?

We have one node, and minimum_master_nodes is not currently specified in our elasticsearch.yml. Would this be the issue?

If you only have a single node I am not sure what is going on. How many indices and shards do you have in the cluster?
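
If you're not sure of the exact numbers, the cat APIs will show them. A quick sketch, assuming the node is reachable on localhost:9200 (adjust the host/port to your setup):

```
# one row per index, with primary/replica counts and on-disk size
curl -s "localhost:9200/_cat/indices?v&h=index,pri,rep,docs.count,store.size"

# cluster-wide totals: active shards, primaries, and unassigned shards
curl -s "localhost:9200/_cat/health?v&h=shards,pri,unassign"
```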

We have 1584 shards, of which 792 are allocated (we have the default of 5 shards per index), and about 50 or so indices.

That sounds like quite a lot for a single node. Please read this blog post about shards and sharding and try to reduce the shard count.
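
For existing indices, one way to bring the count down is the shrink API. This is just a sketch: the index name is made up, and on a single-node cluster the requirement that all shard copies sit on one node is already satisfied.

```
# shrinking requires the source index to be read-only first
curl -X PUT "localhost:9200/logstash-2018.11.01/_settings" -H 'Content-Type: application/json' -d'
{ "index.blocks.write": true }'

# collapse the default 5 primaries into 1 (the target count must be a factor of the source count)
curl -X POST "localhost:9200/logstash-2018.11.01/_shrink/logstash-2018.11.01-shrunk" -H 'Content-Type: application/json' -d'
{ "settings": { "index.number_of_shards": 1, "index.number_of_replicas": 0 } }'
```

Once the shrunken index is green you can delete the original; no reindexing is needed, as shrink reuses the existing segments.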

I am, however, not sure if that alone explains the issue you are seeing, although it certainly could contribute. What is the specification of the hardware you are using? What type of storage do you have? Is there anything in the Elasticsearch logs?

The spec of the server is:
16GB RAM
Intel Xeon E5, 4 processors
Local storage on the server, a normal HDD
The only entry in the Elasticsearch logs, repeated multiple times every few hours, is: [2018-11-21T07:45:45,876][WARN ][o.e.g.DanglingIndicesState] [hostname] [[.kibana/C99USia3ThKbLBNRTsRZdQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata

With a general guideline of 20 shards per gigabyte of heap, you would need a heap of 40G on each of two nodes to accommodate that many shards. Since the maximum recommended heap size is around 30G, that would actually mean 3 nodes. I am confident in stating that your single node is heavily over-allocated. With 16G of physical memory you should only have an 8G heap, which limits you to around 160 shards before you start being affected by memory pressure issues.

If you're planning on sticking with a single node, you should consider reducing things and ensuring that new indices get only 1 primary shard and 0 replicas, since you would need a second node for replicas to be assigned anyway.
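
A minimal sketch of an index template that does this, assuming your new indices follow a logstash-* naming pattern (adjust the template name and pattern to match your own):

```
# new indices matching the pattern get 1 primary shard and 0 replicas
curl -X PUT "localhost:9200/_template/single-shard-logs" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'
```

Note that this only affects indices created after the template is in place; existing indices keep their current settings.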

Those single shards can get quite large without any problem, too: a single shard can easily and comfortably grow to 50G. If you're using time-series data (e.g. logs and metrics), using the Rollover API and rollover-compatible indices would be a remedy for over-allocating shards and indices on a daily basis.
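
Roughly, rollover works like this; the alias and index names below are only examples, and the size/age conditions are values you would tune for your own data:

```
# bootstrap the first index in the series and point a write alias at it
curl -X PUT "localhost:9200/logs-000001" -H 'Content-Type: application/json' -d'
{ "aliases": { "logs-write": {} } }'

# call this periodically (e.g. from Curator's rollover action or cron);
# a new index is created only when one of the conditions is met
curl -X POST "localhost:9200/logs-write/_rollover" -H 'Content-Type: application/json' -d'
{ "conditions": { "max_size": "50gb", "max_age": "30d" } }'
```

Applications then write to the logs-write alias rather than to a dated index name, so new indices (and their shards) are created based on size or age instead of one per day regardless of volume.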

Would the number of shards cause the .kibana index to become a dangling index? There is only one index for Kibana, but Elasticsearch seems to think there are multiple.

If a cluster is significantly overburdened, any number of accompanying delays could result in something weird happening. For example, if the cluster state couldn't update in a timely fashion, Kibana may have tried to create another .kibana index because it couldn't see the one that was already there, and then later found the previous one, leading to the dangling index.
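
One thing you could check: the warning includes the UUID of the on-disk copy (C99USia3ThKbLBNRTsRZdQ), so comparing it against the UUID of the .kibana index the cluster actually knows about would confirm whether the data the node keeps trying to import is just a stale leftover copy. For example:

```
# the uuid column shows which copy of .kibana the cluster metadata references
curl -s "localhost:9200/_cat/indices/.kibana*?v&h=index,uuid,status,pri,rep"
```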
