ELK Update - can not be imported as a dangling index, as index with same name already exists in cluster metadata

Hello,

we have upgraded an old ELK stack from 5.6.4 to 6.8.13 (aiming to eventually reach 7.x), but when we restarted Elasticsearch we got this error:

`[2020-11-24T16:29:45,854][WARN ][o.e.g.DanglingIndicesState] [coll01] [[kpi-2020.08.17/DZHkHK34T8i9eVA6EqiB2g]] can not be imported as a dangling index, as index with same name already exists in cluster metadata`

Looking at the cluster state, we discovered it was "red", and the allocation explanation was:

    [elasticadm@bfasmmits01c tmp]$ curl -X GET "10.100.52.151:9200/_cluster/allocation/explain"
    {
      "index": "kpi-2019.10.29",
      "shard": 1,
      "primary": true,
      "current_state": "unassigned",
      "unassigned_info": {
        "reason": "CLUSTER_RECOVERED",
        "at": "2020-11-24T15:28:54.650Z",
        "last_allocation_status": "throttled"
      },
      "can_allocate": "throttled",
      "allocate_explanation": "allocation temporarily throttled",
      "node_allocation_decisions": [
        {
          "node_id": "6PkGeUptQL6JNNsPPTsv4w",
          "node_name": "coll01",
          "transport_address": "10.100.52.151:9300",
          "node_attributes": {
            "ml.machine_memory": "16640462848",
            "xpack.installed": "true",
            "ml.max_open_jobs": "20",
            "ml.enabled": "true"
          },
          "node_decision": "throttled",
          "store": {
            "in_sync": true,
            "allocation_id": "q-Q8vTGyRu2LO_J9h4SZ8w"
          },
          "deciders": [
            {
              "decider": "throttling",
              "decision": "THROTTLE",
              "explanation": "reached the limit of ongoing initial primary recoveries [4], cluster setting [cluster.routing.allocation.node_initial_primaries_recoveries=4]"
            }
          ]
        }
      ]
    }
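
The "THROTTLE" decision above means the cluster is working through initial primary recoveries a few at a time. While it does so, progress can be watched with the standard `_cat/recovery` endpoint; for example, reusing the host and port from the commands in this thread:

    # Show only recoveries that are currently in progress
    curl -X GET "10.100.52.151:9200/_cat/recovery?v&active_only=true"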

The indices listing reports many (but not all) of the indices as red. Here is a small sample; the .kibana index is red as well:

    [elasticadm@bfasmmits01c tmp]$ curl -XGET "10.100.52.151:9200/_cat/indices?v"
    health status index                      uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    red    open   filebeat-2017.07.20        OGhOZgoWQkO_3NVhjoNZ8w   5   0
    red    open   filebeat-2017.05.03        tBEbEa9SSWGpwhqkAWENOQ   5   0
    red    open   filebeat-2017.12.04        wcutxFPqQK24O-YHmQWQPQ   5   0
    red    open   kpi-2020.05.25             ZSKCPp0TQaG8NPbsP5txmA   5   0         34            0      445kb          445kb
    red    open   alf-statistiche-2019.04.11 zHMNPgR_SfeA5P6OUFESHQ   5   0
    red    open   puc-statistiche-2019.02.27 W575569HT4yjHfTn2yhEZQ   5   0
    red    open   filebeat-2017.06.14        DRe7cigeSzWA_3XjdK29HA   5   0
    red    open   puc-statistiche-2019.01.28 Cwz3ld2xSjOPVJbKK0rvIg   5   0
    red    open   puc-statistiche-2019.03.12 R5zewwIVT26lgRYRg64LHA   5   0
    green  open   kpi-2020.05.30             S4ca0pkoSoWhsa5ZgOFIYg   5   0          1            0     17.3kb         17.3kb
    red    open   filebeat-2017.12.21        AQIrq9J-SDWEJoudLY58-g   5   0
    green  open   filebeat-2020.08.03        ffUxHZfPT8yNpQvitQ7rhw   5   0     129627            0     80.3mb         80.3mb
    red    open   filebeat-2017.08.11        FYmzVmpVTDS0Flf_jzDF0Q   5   0
    red    open   kpi-2020.10.22             81LuapSYTK29tM9DF2nuyg   5   0        223            0      2.7mb          2.7mb
    green  open   kpi-2020.06.26             PtTVZkIES4uiDjCODlETuQ   5   0         39            0    661.3kb        661.3kb
    green  open   kpi-2020.10.16             xsaYqkSyQfKOoO7wYkfqJw   5   0        358            0      5.6mb          5.6mb
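
To list only the problematic indices, the `_cat/indices` endpoint accepts a `health` filter; for example:

    # Show only indices whose health is red
    curl -X GET "10.100.52.151:9200/_cat/indices?v&health=red"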

Could you please help us with this?

Thanks
S.

The key word here is temporarily - the cluster is making progress on allocating some other shards and will get around to this one eventually.
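
The limit comes from the dynamic setting named in the explanation, `cluster.routing.allocation.node_initial_primaries_recoveries`, so if recovery is too slow it can be raised temporarily through the cluster settings API. A sketch (the value 8 is only an illustration; revert the setting once the cluster is green):

    # Temporarily allow more concurrent initial primary recoveries per node
    curl -X PUT "10.100.52.151:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
    {
      "transient": {
        "cluster.routing.allocation.node_initial_primaries_recoveries": 8
      }
    }'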

How many shards do you have in total? It looks like you're using daily indices even though you have a very small dataset.
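
A quick way to get that number is to count the lines of `_cat/shards`, since the endpoint prints one line per shard copy:

    # Total number of shard copies in the cluster
    curl -s "10.100.52.151:9200/_cat/shards" | wc -l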

Hello David,

there was a problem with the filesystem of one node of the cluster: it was full. After I cleaned it up, Elasticsearch started correctly.

Thanks a lot
