I have recently set up Curator to delete old data, but since then I am seeing a large increase in dangling index errors, and they relate to the .kibana index, which Curator doesn't touch.
[2018-11-21T07:45:45,876][WARN ][o.e.g.DanglingIndicesState] [hostname] [[.kibana/C99USia3ThKbLBNRTsRZdQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
I am however not sure if that alone explains the issue you are seeing, although it certainly could contribute. What is the specification of the hardware you are using? What type of storage do you have? Is there anything in the Elasticsearch logs?
The spec of the server is:
16GB RAM
Intel Xeon E5, 4 processors
The storage is local to the server, a normal HDD.
The only entry in the Elasticsearch logs is the following, repeated multiple times every few hours:
[2018-11-21T07:45:45,876][WARN ][o.e.g.DanglingIndicesState] [hostname] [[.kibana/C99USia3ThKbLBNRTsRZdQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
With the general guideline of 20 shards per gigabyte of heap, you would need a 40G heap on each of two nodes to accommodate that many shards. Since the maximum recommended heap size is 30G, that would actually mean three nodes. I am confident in stating that your single node is heavily over-allocated. With 16G of physical memory you should only have an 8G heap, which limits you to roughly 160 shards before you start being affected by memory pressure issues.
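As a quick sanity check of the arithmetic above, here is a tiny Python sketch of that rule of thumb (heap at roughly half of physical RAM, capped near 30G, about 20 shards per GB of heap); the numbers are the ones from this thread, not anything Elasticsearch computes for you.

```python
# Rule-of-thumb shard budget, using the figures discussed above.
ram_gb = 16                        # physical memory on the single node
heap_gb = min(ram_gb // 2, 30)     # heap ~ half of RAM, capped near 30 GB -> 8
shard_budget = heap_gb * 20        # ~20 shards per GB of heap -> 160
print(heap_gb, shard_budget)       # 8 160
```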
If you're planning on sticking with a single node, you should consider reducing your shard count and ensuring that new indices get only 1 primary shard and 0 replicas, since you would need a second node to host replicas anyway.
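One way to apply that default (not necessarily the only one) is an index template, so every new index matching a pattern is created with 1 primary shard and 0 replicas. A minimal sketch with the official `elasticsearch` Python client follows; the template name, index pattern, and connection details are placeholders for this example.

```python
from elasticsearch import Elasticsearch

# Assumes a local single-node cluster; adjust the host as needed.
es = Elasticsearch(["http://localhost:9200"])

# 6.x-style index template: any new index matching "logstash-*"
# is created with 1 primary shard and no replicas.
es.indices.put_template(
    name="single-shard-logs",
    body={
        "index_patterns": ["logstash-*"],
        "settings": {
            "number_of_shards": 1,
            "number_of_replicas": 0,
        },
    },
)
```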
Those single shards can get quite large without any problem, too; a single shard can easily and comfortably grow to 50G. If you're indexing time-series data (e.g. logs and metrics), using the Rollover API and rollover-compatible indices would be a remedy for over-allocating shards and indices on a daily basis.
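As a rough illustration of the Rollover API, here is a sketch with the Python client. It assumes you already have a write alias (named "logs-write" here purely for the example) pointing at the current index, e.g. one created as "logs-000001" with that alias attached; the size and age conditions are just sample values.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Rollover: if any condition is met, a new index (e.g. "logs-000002") is
# created and the write alias is moved to it. Call this on a schedule.
resp = es.indices.rollover(
    alias="logs-write",
    body={
        "conditions": {
            "max_size": "50gb",   # roll when the index reaches ~50 GB
            "max_age": "7d",      # or when it is a week old
        }
    },
)
print(resp["rolled_over"], resp.get("new_index"))
```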
Would the number of shards cause the .kibana index to become a dangling index? There is only one index for Kibana, but Elasticsearch seems to think there are multiple.
If a cluster is significantly overburdened, any number of accompanying delays can result in odd behaviour. For example, if the cluster state couldn't update in a timely fashion, Kibana may have tried to create another .kibana index because it couldn't see the one that was already there; when the earlier copy turned up again, it became the dangling index reported in your logs.