I have just added a whitelist to one of the nodes so I could pull over some data for a feature we have been working on in development. I want to run it on the main cluster now, but keep all the historic data we gathered during development.
I configured the elasticsearch.yml file and then ran
systemctl restart elasticsearch.service to cycle the service on the node in question, which is running CentOS 7 and Elasticsearch 6.6.1.
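For reference, the change was along these lines. This is a hedged sketch of what I mean by "whitelist", assuming it was the remote reindex whitelist; the host name here is a placeholder, not the real one:

```yaml
# elasticsearch.yml (sketch) - allow pulling data from the dev cluster
# via reindex-from-remote; "dev-node" is a placeholder hostname
reindex.remote.whitelist: "dev-node:9200"
```

The setting only takes effect after a restart, which is why I cycled the service.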
The node, however, failed to start. When I looked in the logs I found the following:
java.lang.IllegalStateException: index and alias names need to be unique, but the following duplicates were found [.kibana (alias of [.kibana_2/15QKiqziRlGHM6AyCS0WjA])]
When I checked the cluster health it still reported as green.
When I checked _cat/nodes it still showed the restarted node as an active member, although there were no load stats for it. Since I started writing this, the cluster has noticed the node is gone and now shows yellow.
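These are roughly the checks I ran (Dev Tools console syntax; run against any node still in the cluster):

```
GET _cluster/health
GET _cat/nodes?v
```

The `?v` flag just adds column headers so the missing load stats are easier to spot.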
I tried to restart it again, thinking maybe it hadn't started the first time because it saw that member as already joined, but I got the same error in the logs.
We recently upgraded from 6.3.1 to 6.6.1, in case that has a bearing.
Looking at the Kibana indexes, there is no index named ".kibana", but we do have ".kibana_1" and ".kibana_2". Checking the aliases on those, there are none on ".kibana_1", but ".kibana_2" holds the alias ".kibana".
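This is how I checked (console syntax); the output below reflects what I described above, not a verbatim paste:

```
GET _cat/aliases?v

# alias    index      filter  routing.index  routing.search
# .kibana  .kibana_2  -       -              -
```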
My only thought is that I should reindex the .kibana_2 index into a holding index such as "backup_kibana" (it holds 24 documents at the moment, a mix of index patterns and visualisations), then delete the ".kibana_2" index and restart the service with the index gone. If all this does is make it complain about some other index, though, I'm in less of a position to reindex that one.
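The plan sketched as requests (console syntax; "backup_kibana" is just my chosen holding-index name, and I'd want confirmation before actually running the DELETE):

```
POST _reindex
{
  "source": { "index": ".kibana_2" },
  "dest":   { "index": "backup_kibana" }
}

DELETE .kibana_2
```

Then restart Elasticsearch on the failed node and see whether it joins cleanly.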