ES 6.7.2 - "failed to read local state, exiting" on minor k8s pod updates (not ES upgrade)

Our CI/CD pipeline is hung on a failed data pod (data-1) that cannot start up due to the following error:

[2019-07-16T16:39:30,650][ERROR][o.e.g.GatewayMetaState   ] [stgsiemv2-elasticsearch-data-1] failed to read local state, exiting...
java.lang.IllegalStateException: index and alias names need to be unique, but the following duplicates were found [.kibana (alias of [.kibana_1/FJprNkLuQi2B6pRnZpbP6w])]
	at org.elasticsearch.cluster.metadata.MetaData$Builder.build(MetaData.java:1118) ~[elasticsearch-6.7.2.jar:6.7.2]

I'd like to know how it got into this state, but right now my biggest concern is how to get out of it safely without wiping the stored data in the .kibana* indices (I could probably resolve this by deleting the indices, but I'd like to avoid that). I did try reindexing .kibana_1 to .kibana_3 in the hope that it would eliminate the conflict. At this point I have the alias ".kibana" pointing to .kibana_1, but I also have three other .kibana indices (.kibana1, .kibana2, .kibana3, all reindexes created in an attempt to clear the conflict). None of the above has worked; the data node still fails with this error.
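For what it's worth, the exception comes from a check that treats index names and alias names as a single namespace: an alias may not share a name with a concrete index. Here is a minimal Python sketch of that uniqueness rule (the data is hypothetical and only meant to resemble the state in the error message, not the actual Elasticsearch metadata structures):

```python
def find_conflicts(indices, aliases):
    """Return alias names that collide with index names.

    indices: iterable of index names
    aliases: dict mapping alias name -> index it points to
    """
    index_names = set(indices)
    return {alias: target for alias, target in aliases.items()
            if alias in index_names}

# Hypothetical state resembling the error above: a concrete index
# named ".kibana" exists on disk alongside an alias ".kibana"
# pointing at ".kibana_1" -- the two names collide.
conflicts = find_conflicts(
    [".kibana", ".kibana_1"],
    {".kibana": ".kibana_1"},
)
print(conflicts)  # {'.kibana': '.kibana_1'}
```

This would explain why reindexing alone doesn't help: the conflict is between the alias name and a same-named index in the node's local metadata, not between any two of the reindexed copies.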

Is the cluster health green or yellow without that data node? If so, you can just wipe the data folder on the node and bring it back online.

Also, can you check past logs of elasticsearch-data-1 and see if there are warnings about dangling indices?
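If it helps while digging through the old logs, here is a small sketch for scanning saved log lines for dangling-index warnings. It matches loosely on "dangl" because the exact message wording varies between Elasticsearch versions; the sample log line below is hypothetical, not real output from this cluster:

```python
import re

def dangling_warnings(log_lines):
    """Return log lines that mention dangling/dangled indices.

    Matches loosely on 'dangl' since the exact message differs
    across Elasticsearch versions.
    """
    pattern = re.compile(r"dangl", re.IGNORECASE)
    return [line for line in log_lines if pattern.search(line)]

# Hypothetical log excerpt for illustration only:
sample = [
    "[2019-07-16T16:10:01,100][WARN ][o.e.g.DanglingIndicesState] "
    "[data-1] dangling index exists on local file system, "
    "but not in cluster metadata, auto import to cluster state",
    "[2019-07-16T16:10:02,200][INFO ][o.e.n.Node] [data-1] started",
]
print(dangling_warnings(sample))
```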

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.