Elasticsearch snapshot restoration error for system indices

I'm trying to restore a snapshot (a snapshot of the entire cluster) while excluding internal indices, as recommended in the documentation (Restore a snapshot | Elasticsearch Guide [7.17] | Elastic).

Although this method is supposed to exclude system indices and other dot (.) indices, it still gives me an error about internal indices.

root@test-search-es01.qaextranet:~# curl -k -u elastic "https://localhost:9200/_cat/indices?v"
Enter host password for user 'elastic':
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .geoip_databases HhBLRnu2TzuaE5clME08ZA   1   1         41           41     77.9mb         38.9mb
green  open   .security-7      7v01X_JsTZqI5sFkOOfPDw   1   1          6            7     42.6kb         21.3kb
root@test-search-es01.qaextranet:~#

root@test-search-es01.qaextranet:~# curl -k -u elastic -X POST "https://localhost:9200/_snapshot/onprem_to_azure/test-snapshot-2022-10-20_13-23-41/_restore" -H 'Content-Type: application/json' -d'
> {
>   "indices": "*,-.*"
> }
> '
Enter host password for user 'elastic':
{"error":{"root_cause":[{"type":"snapshot_restore_exception","reason":"[onprem_to_azure:test-snapshot-2022-10-20_13-23-41/q1JwXZYRSUGpRSkBZ_gLlg] cannot restore index [.ds-ilm-history-5-2022.10.17-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"}],"type":"snapshot_restore_exception","reason":"[onprem_to_azure:test-snapshot-2022-10-20_13-23-41/q1JwXZYRSUGpRSkBZ_gLlg] cannot restore index [.ds-ilm-history-5-2022.10.17-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"},"status":500}root@test-search-es01.qaextranet:~#
root@test-search-es01.qaextranet:~#

I also tried to specifically exclude the index that caused the error, but it still doesn't work.

root@test-search-es01.qaextranet:~# curl -k -u elastic -X POST "https://localhost:9200/_snapshot/onprem_to_azure/test-snapshot-2022-10-20_13-23-41/_restore" -H 'Content-Type: application/json' -d'
{
  "indices": "*,-.*,-.ds*"
}'
Enter host password for user 'elastic':
{"error":{"root_cause":[{"type":"snapshot_restore_exception","reason":"[onprem_to_azure:test-snapshot-2022-10-20_13-23-41/q1JwXZYRSUGpRSkBZ_gLlg] cannot restore index [.ds-ilm-history-5-2022.10.17-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"}],"type":"snapshot_restore_exception","reason":"[onprem_to_azure:test-snapshot-2022-10-20_13-23-41/q1JwXZYRSUGpRSkBZ_gLlg] cannot restore index [.ds-ilm-history-5-2022.10.17-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"},"status":500}root@test-search-es01.qaextranet:~#

indices

(Optional, string or array of strings) Comma-separated list of indices and data streams to restore. Supports multi-target syntax. Defaults to all regular indices and regular data streams in the snapshot.

You can’t use this parameter to restore system indices or system data streams. Use feature_states instead.

So, as I read it, the indices field has no effect on internal indices.

So perhaps look at feature_states and include_global_state.

I suspect you may want

include_global_state: false

and/or

feature_states: ["none"]

But I would read through all of that; it has changed since 7.x.
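Combining those two suggestions into one request body would look something like this (a sketch only, reusing the repository and snapshot names from the commands above; I haven't run it against your cluster):

```shell
# Sketch: restore everything except dot indices, and skip both the
# cluster-wide state and all feature states (system indices)
curl -k -u elastic -X POST "https://localhost:9200/_snapshot/onprem_to_azure/test-snapshot-2022-10-20_13-23-41/_restore" -H 'Content-Type: application/json' -d'
{
  "indices": "*,-.*",
  "include_global_state": false,
  "feature_states": ["none"]
}
'
```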

Oh darn... is this restore on a 7.x cluster?

Yeah, both the backup and the restore were done on 7.17.0.

Weird... Can you try the same command from within Kibana -> Dev Tools?

None of these worked.

root@d8812acff23c:~# curl -X POST "localhost:9200/_snapshot/azure_repo/my_snapshot_2/_restore" -H 'Content-Type: application/json' -d'
> {
> "indices": "*,-.*",
> "include_global_state": false
> }
> '
{"error":{"root_cause":[{"type":"snapshot_restore_exception","reason":"[azure_repo:my_snapshot_2/xmdavGDARfGZFm-_kp8x-g] cannot restore index [.ds-ilm-history-5-2022.10.13-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"}],"type":"snapshot_restore_exception","reason":"[azure_repo:my_snapshot_2/xmdavGDARfGZFm-_kp8x-g] cannot restore index [.ds-ilm-history-5-2022.10.13-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"},"status":500}root@d8812acff23c:~#
root@d8812acff23c:~# curl -X POST "localhost:9200/_snapshot/azure_repo/my_snapshot_2/_restore" -H 'Content-Type: application/json' -d'
> {
> "indices": "*,-.*",
> "include_global_state": false,
> "feature_states": ["none"]
> }
> '
{"error":{"root_cause":[{"type":"snapshot_restore_exception","reason":"[azure_repo:my_snapshot_2/xmdavGDARfGZFm-_kp8x-g] cannot restore index [.ds-ilm-history-5-2022.10.13-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"}],"type":"snapshot_restore_exception","reason":"[azure_repo:my_snapshot_2/xmdavGDARfGZFm-_kp8x-g] cannot restore index [.ds-ilm-history-5-2022.10.13-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"},"status":500}root@d8812acff23c:~#
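For completeness, the error message itself points at one more workaround: restoring the conflicting indices under different names with rename_pattern / rename_replacement. A sketch (the restored_ prefix is an arbitrary choice of mine, not anything from the snapshot):

```shell
# Sketch: restore all matched indices under new names so they don't
# collide with indices already open in the cluster
curl -X POST "localhost:9200/_snapshot/azure_repo/my_snapshot_2/_restore" -H 'Content-Type: application/json' -d'
{
  "indices": "*,-.*",
  "include_global_state": false,
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}
'
```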

The only thing that works for me is taking a snapshot with just "indices": "*,-.*" and restoring with the same "indices": "*,-.*". This restores only the non-internal indices.

root@d8812acff23c:~# curl -X PUT "localhost:9200/_snapshot/azure_repo/my_snapshot_5?wait_for_completion=true&pretty" -H 'Content-Type: application/json' -d'
> {
> "indices": "*,-.*"
> }
> '
{
  "snapshot" : {
    "snapshot" : "my_snapshot_5",
    "uuid" : "Us1U3C9nQOqMKGLcVWZa1A",
    "repository" : "azure_repo",
    "version_id" : 7170099,
    "version" : "7.17.0",
    "indices" : [
      "my-index-000001",
      ".geoip_databases"
    ],
    "data_streams" : [ ],
    "include_global_state" : true,
    "state" : "SUCCESS",
    "start_time" : "2022-10-27T08:46:01.009Z",
    "start_time_in_millis" : 1666860361009,
    "end_time" : "2022-10-27T08:46:04.012Z",
    "end_time_in_millis" : 1666860364012,
    "duration_in_millis" : 3003,
    "failures" : [ ],
    "shards" : {
      "total" : 2,
      "failed" : 0,
      "successful" : 2
    },
    "feature_states" : [
      {
        "feature_name" : "geoip",
        "indices" : [
          ".geoip_databases"
        ]
      }
    ]
  }
}
root@d8812acff23c:~#

root@d8812acff23c:~# curl localhost:9200/_cat/indices?v
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .geoip_databases 0jQEhaCfQOm2Dt6jhUPMlg   1   1         41           41     78.2mb         39.1mb
green  open   my-index-000001  1UQIYamxQwiA2lOkvZA3bg   1   1          1            0      5.1kb          4.9kb
root@d8812acff23c:~# 
root@d8812acff23c:~# curl -X DELETE localhost:9200/my-index-000001
{"acknowledged":true}root@d8812acff23c:~# 
root@d8812acff23c:~# 
root@d8812acff23c:~# curl -X POST "localhost:9200/_snapshot/azure_repo/my_snapshot_5/_restore" -H 'Content-Type: application/json' -d'
> {
> "indices": "*,-.*"
> }'
{"accepted":true}root@d8812acff23c:~# 
root@d8812acff23c:~# 
root@d8812acff23c:~# 
root@d8812acff23c:~# curl localhost:9200/_cat/indices?v
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .geoip_databases 0jQEhaCfQOm2Dt6jhUPMlg   1   1         41           41     78.2mb         39.1mb
green  open   my-index-000001  jL16ryOcSO2KFdt_AhrJvA   1   1          1            0      9.8kb          4.9kb
root@d8812acff23c:~# 
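Beyond _cat/indices, a quick count query can confirm the restored index actually contains the expected documents (a sketch, to be run on the restored cluster):

```shell
# Sketch: check the document count of the restored index
curl "localhost:9200/my-index-000001/_count?pretty"
```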
