I am running two Elasticsearch clusters, both on 8.13.2. I upgraded the larger cluster to 8.15.3 using a rolling upgrade, switched all Kibana nodes off, then upgraded Kibana, and everything went well. Then I upgraded the smaller cluster: I disabled shard allocation and upgraded the Elasticsearch nodes, and that also went well. Unfortunately, when I then upgraded Kibana, I forgot to re-enable shard allocation first. I have two Kibana nodes; I upgraded only the first one while the second is off.

Now when I open Kibana in the browser I get "Kibana server is not ready yet." The .kibana alias points to .kibana_8.8.0_001 and the .kibana_task_manager alias points to .kibana_task_manager_8.7.0_001. However, when I check the indices, I only have .kibana_7.17.0_001, plus three .kibana_task_manager indices: .kibana_task_manager_7.12.0_001, .kibana_task_manager_7.15.1_001, and .kibana_task_manager_7.17.0_001.

I have a snapshot backup, but I cannot get Kibana to start. I also could not delete the .kibana_task_manager indices from the command line the way I used to in the old days when a Kibana migration failed. Any help will be greatly appreciated. Thank you.
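For anyone following along, the alias and index state above can be inspected, and shard allocation re-enabled, roughly like this (Dev Tools console syntax; note the Kibana system indices are hidden in 8.x, so the wildcard needs expand_wildcards=all, and resetting the allocation setting to null restores its default):

GET _cat/aliases/.kibana*?v
GET _cat/indices/.kibana*?v&expand_wildcards=all

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}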
AJ
After going through the log files and the indices from the command line, I gave up; nothing worked. So I created a new cluster with default settings running 8.13.0 and restored from a snapshot. A full restore failed and I still could not access Kibana, so I restored the regular indices and the Kibana system indices separately. For the Kibana indices I restored only the kibana feature state:
POST _snapshot/<my_repository>/<my_snapshot>/_restore
{
  "indices": "-*",
  "feature_states": ["kibana"]
}
All worked. I only lost the users, which was not an issue for me.
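As a side note, before restoring it may be worth confirming that the snapshot actually contains the feature state you need; the snapshot details list the included feature_states (same placeholder names as above):

GET _snapshot/<my_repository>/<my_snapshot>

The lost users live in the .security system index, which belongs to the security feature state, so in principle they could be brought back with a second feature-state restore; this is a sketch, assuming the snapshot included that feature state:

POST _snapshot/<my_repository>/<my_snapshot>/_restore
{
  "indices": "-*",
  "feature_states": ["security"]
}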