Kibana aliases conflicting when restoring a snapshot

I'm trying to restore a snapshot created on an Elastic Stack 7.9.1 cluster into another cluster running 7.14.0, but it seems the restore is corrupting the indices, and after the attempt I'm no longer able to open Kibana.
While the restore is in progress, I get the following message when I try to access Kibana:

{"statusCode":400,"error":"Bad Request","message":"[alias [.kibana] has more than one index associated with it [.kibana_7.14.0_001, .kibana_1], can't execute a single index op: illegal_argument_exception: [illegal_argument_exception] Reason: alias [.kibana] has more than one index associated with it [.kibana_7.14.0_001, .kibana_1], can't execute a single index op]: alias [.kibana] has more than one index associated with it [.kibana_7.14.0_001, .kibana_1], can't execute a single index op"}

After the restore completes, I only get the following message when trying to access Kibana:

{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred."}

I have tried several combinations of restore options, starting each time from a clean environment, but I always get the same result.
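
For reference, while the restore is running, checking which indices the alias points to (just a sanity check with the alias API) confirms what the error says:

GET _alias/.kibana

It returns both .kibana_7.14.0_001 and .kibana_1, matching the error above.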

Am I missing a prerequisite step, or doing something wrong?

Any suggestions on what I can try?

When you do the restore, did you try excluding system indices? See Restore a snapshot | Elasticsearch Guide [7.16] | Elastic.
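
Something like this, assuming a repository named my_repo and a snapshot named my_snapshot (both placeholders); the -.* exclusion keeps dot-prefixed system indices such as .kibana out of the restore:

POST _snapshot/my_repo/my_snapshot/_restore
{
  "indices": "*,-.*",
  "include_global_state": false
}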

But that would exclude the .kibana index, which would leave out all the Kibana saved objects, wouldn't it?

It would, so if you want to use the snapshotted one you need to delete the one already in the cluster, or use the rename options.
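
If you'd rather keep both copies, the restore API's rename options can move the snapshotted indices out of the way. A sketch, again with placeholder repository and snapshot names:

POST _snapshot/my_repo/my_snapshot/_restore
{
  "indices": ".kibana*",
  "rename_pattern": ".kibana(.*)",
  "rename_replacement": "restored_.kibana$1"
}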

So, should I try deleting the .kibana index and then restoring the snapshot?

As long as you don't have anything in there you need (or you have a backup).

Clearing all indices before restoring the snapshot worked!
Thank you for the suggestion @warkolm!
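
For anyone else hitting this: in my case I cleared everything, but the Kibana indices are the ones that conflict. Roughly what I ran (the wildcard delete assumes action.destructive_requires_name isn't enabled, and that nothing in those indices is still needed; repository and snapshot names are placeholders):

DELETE .kibana*

POST _snapshot/my_repo/my_snapshot/_restore
{
  "indices": "*"
}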

