Cloud isn't restoring from snapshot

This seems like a pretty basic use case, so I don't know how it hasn't been caught in testing, but I can't bring a snapshot over because of the .kibana index (and .apm-*), which come by default on the new instance. I have hundreds of indices to import, so I can't pull in each one I need manually either. That leaves writing out all 300 index names by hand, excluding the .kibana index. Am I missing something?

```
failed to restore snapshot
java.lang.IllegalStateException: index and alias names need to be unique, but the following duplicates were found [.kibana (alias of [.kibana_1/k-nvOLVkSp-gKDdnJ227RQ])]
    at org.elasticsearch.cluster.metadata.MetaData$ ~[elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.cluster.ClusterState$Builder.metaData( ~[elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.snapshots.RestoreService$1.execute( ~[elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.cluster.ClusterStateUpdateTask.execute( ~[elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.cluster.service.MasterService.executeTasks( ~[elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs( ~[elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.cluster.service.MasterService.runTasks( [elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.cluster.service.MasterService$ [elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed( [elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.cluster.service.TaskBatcher$ [elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ [elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean( [elasticsearch-7.2.0.jar:7.2.0]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$ [elasticsearch-7.2.0.jar:7.2.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:?]
    at java.util.concurrent.ThreadPoolExecutor$ [?:?]
    at [?:?]
```

There are a couple of options:

  • If you don't want to overwrite your existing indices, you can exclude Kibana with a pattern like (eg) -.kibana* in the restore request's index list.
  • If you do want to overwrite an existing index, you can stop Kibana (or stop routing to all nodes in the cluster, to stop Kibana recreating the indices), delete all the .kibana* indices/aliases, restore from the snapshot, then restart Kibana.
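For the first option, the restore request would look something like this (the repository and snapshot names here are placeholders). Note that, if I remember the multi-index syntax correctly, an exclusion has to follow an inclusive pattern, so `*,-.kibana*` rather than `-.kibana*` on its own:

```
POST /_snapshot/my_repository/my_snapshot/_restore
{
  "indices": "*,-.kibana*",
  "include_global_state": false
}
```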

Neither of those solutions is working for me.

When I stop the routing, I can't delete the indices because I get a 503 Service Unavailable.

When I try -.kibana* I get an error that no indices match that name. The wildcard doesn't work, and even when I explicitly specify -.kibana_1, no index is found.

If you use the console API (rather than curl), the close/delete should work even after stopping the routing ... either that or we've sneaked a bug into the console API recently, since that used to work (I'll check).
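From the API console, deleting the Kibana indices and aliases would be something like this (assuming the cluster permits wildcard deletes, i.e. `action.destructive_requires_name` hasn't been set to true):

```
DELETE /.kibana*
```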

On the -.kibana* not working - interesting ... the API definitely lets you do that; you might need to use the ES API directly.

I'll create a bug report internally - we should fix that.

When you say console API, do you mean the cloud console's stop routing button? Or something within Kibana?

Sorry - Console API = the "Elasticsearch > API console" entry under the deployment (eg the URL deployment/:id/elasticsearch/console) - it is "immune" to the effects of the stop routing function.

I also looked into the snapshot UI. Those spurious validation checks were recently removed, but unfortunately the fix isn't slated to reach ECE until 2.4, so in the meantime it is necessary to use the ES API.
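Putting the pieces together, the overwrite workaround through the ES API would be roughly the following, with Kibana (or routing) stopped first and restarted afterwards; repository and snapshot names are placeholders:

```
DELETE /.kibana*

POST /_snapshot/my_repository/my_snapshot/_restore
{
  "indices": "*",
  "include_global_state": false
}
```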

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.