Hi There,
I upgraded our Elastic Stack from 7.6.1 to 7.7 a couple of days after 7.7 came out. I upgraded literally everything: Elasticsearch, Kibana, Logstash, and all Beats. About a week ago I set up snapshots to make sure everything critical is backed up. Today I wanted to test and document how a restore would work, in case we ever need one. I wanted to do a partial restore, that is, to restore only .kibana_1 and .kibana_task_manager_1 from last night's snapshot.
Turned off Kibana and went looking for the appropriate snapshot (it's the last one in the list):
curl --insecure --user myuser:mypassword https://localhost:9200/_cat/snapshots/Archive
daily-snapshot-2020.06.23-1g8xtlb1rr2sqwc1_5glzw SUCCESS 1592877600 02:00:00 1592877603 02:00:03 2.6s 6 6 0 6
daily-snapshot-2020.06.24-y-2ueb6iskcfal4ir-sglw SUCCESS 1592964000 02:00:00 1592964003 02:00:03 2.7s 6 6 0 6
daily-snapshot-2020.06.25-zguowl7ftymk7yjptt3mew SUCCESS 1593050401 02:00:01 1593050404 02:00:04 3.4s 6 6 0 6
daily-snapshot-2020.06.26-vlrdsu4dqik3frcolotyua SUCCESS 1593136800 02:00:00 1593136803 02:00:03 2.6s 6 6 0 6
daily-snapshot-2020.06.27-iobuzcx_qbiq-2fgwjc1ta SUCCESS 1593223200 02:00:00 1593223203 02:00:03 3s 6 6 0 6
daily-snapshot-2020.06.28-rquku6zisvojm1_cilsq0a SUCCESS 1593309600 02:00:00 1593309603 02:00:03 3.2s 6 6 0 6
daily-snapshot-2020.06.29-xmz7h4i9t2a5egj3wdiu4a SUCCESS 1593396001 02:00:01 1593396006 02:00:06 5.2s 6 6 0 6
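In case it matters, here is how I confirmed which snapshot is "last night's": the two numeric columns after SUCCESS are start/end epoch seconds, and a quick sketch decodes them (the timestamps are copied from the last row above):

```python
from datetime import datetime, timezone

# Start/end epoch seconds from the last _cat/snapshots row above
start, end = 1593396001, 1593396006

# Decode to UTC to confirm this is the 2020-06-29 02:00 snapshot
for ts in (start, end):
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
```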
Attempted to restore; it failed with a descriptive error:
curl --user myuser:mypassword --header "Content-Type: application/json" --insecure -X POST --data "{\"indices\":\".kibana_1,.kibana_task_manager_1\"}" https://localhost:9200/_snapshot/Archive/daily-snapshot-2020.06.29-xmz7h4i9t2a5egj3wdiu4a/_restore
{"error":{"root_cause":[{"type":"snapshot_restore_exception","reason":"[Archive:daily-snapshot-2020.06.29-xmz7h4i9t2a5egj3wdiu4a/roc57QK3TX2S8XYUGkpUkw] cannot restore index [.kibana_task_manager_1] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"}],"type":"snapshot_restore_exception","reason":"[Archive:daily-snapshot-2020.06.29-xmz7h4i9t2a5egj3wdiu4a/roc57QK3TX2S8XYUGkpUkw] cannot restore index [.kibana_task_manager_1] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"},"status":500}
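For completeness, the rename option the error suggests would be a request body like the following (the `restored_` prefix is just an illustrative name, not something I actually used):

```json
{
  "indices": ".kibana_1,.kibana_task_manager_1",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}
```

I chose the other suggestion instead and deleted the existing indices, as shown below.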
Listed all existing Kibana indices to verify:
curl --insecure --user myuser:mypassword https://localhost:9200/_cat/indices | grep kibana
green open .kibana_task_manager_1          KatrKiZRRBie6u4y_V1tmA 1 0
green open .monitoring-kibana-7-2020.06.25 7kVaqeo_QTqxUkFD0zz5nw 1 0 8633 0 1.6mb 1.6mb
green open .monitoring-kibana-7-2020.06.24 CRy3kbLGTXqwoF8mzdZy9g 1 0 8626 0 1.7mb 1.7mb
green open .monitoring-kibana-7-2020.06.27 tqIkH5hYSmCBVm9FIJX0sA 1 0 8637 0 1.6mb 1.6mb
green open .monitoring-kibana-7-2020.06.26 WzJ-vN9bQNyoHv41DsXxMA 1 0 8637 0 1.7mb 1.7mb
green open .monitoring-kibana-7-2020.06.23 hqiIe3ueRSuS2pgZXAW1yQ 1 0 8627 0 1.7mb 1.7mb
green open .monitoring-kibana-7-2020.06.29 pjq8XnEyS06rjMaFeAwpAA 1 0 8277 0 1.7mb 1.7mb
green open .monitoring-kibana-7-2020.06.28 FaAgARQUQsm6-ptbarLoLg 1 0 8634 0 1.7mb 1.7mb
green open .monitoring-kibana-7-2020.06.30 Q425XBhfSnWEWtJvYIE_Gg 1 0 5 0 60.3kb 60.3kb
green open .kibana_1                       haYCu0oPTY6kK9I0uVYHcQ 1 0
Deleted the two Kibana indices. "I have backups anyway, what can go wrong?"
curl --user myuser:mypassword --insecure -X DELETE https://localhost:9200/.kibana_1
{"acknowledged":true}
curl --user myuser:mypassword --insecure -X DELETE https://localhost:9200/.kibana_task_manager_1
{"acknowledged":true}
Successfully restored indices from last night's snapshot
curl --user myuser:mypassword --header "Content-Type: application/json" --insecure -X POST --data "{\"indices\":\".kibana_1,.kibana_task_manager_1\"}" https://localhost:9200/_snapshot/Archive/daily-snapshot-2020.06.29-xmz7h4i9t2a5egj3wdiu4a/_restore
{"accepted":true}
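(As an aside, I nearly tripped over the shell escaping of the quotes in that request body; a quick sketch of how the body could be generated programmatically instead of hand-escaped:)

```python
import json

# Build the _restore request body instead of hand-escaping quotes in the shell
body = json.dumps({"indices": ".kibana_1,.kibana_task_manager_1"})
print(body)  # {"indices": ".kibana_1,.kibana_task_manager_1"}
```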
Tried to start Kibana. It could not load, and I found this in the Kibana logs:
FATAL Error: Index .kibana_1 belongs to a version of Kibana that cannot be automatically migrated. Reset it or use the X-Pack upgrade assistant.
Details of the snapshot in question. Notice that it says "version":"7.7.0".
curl --insecure --user myuser:mypassword https://localhost:9200/_snapshot/Archive/daily-snapshot-2020.06.29-xmz7h4i9t2a5egj3wdiu4a
{"snapshots":[{"snapshot":"daily-snapshot-2020.06.29-xmz7h4i9t2a5egj3wdiu4a","uuid":"roc57QK3TX2S8XYUGkpUkw","version_id":7070099,"version":"7.7.0","indices":[".kibana_1",".apm-custom-link",".security-7",".apm-agent-configuration",".kibana_task_manager_1",".async-search"],"include_global_state":true,"metadata":{"policy":"daily-snapshots"},"state":"SUCCESS","start_time":"2020-06-29T02:00:01.258Z","start_time_in_millis":1593396001258,"end_time":"2020-06-29T02:00:06.484Z","end_time_in_millis":1593396006484,"duration_in_millis":5226,"failures":[],"shards":{"total":6,"failed":0,"successful":6}}]}
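(To double-check the version claim, I pulled it straight out of the snapshot metadata; a sketch using a trimmed copy of the response above:)

```python
import json

# A trimmed copy of the _snapshot API response shown above
response = json.loads('''
{"snapshots":[{"snapshot":"daily-snapshot-2020.06.29-xmz7h4i9t2a5egj3wdiu4a",
 "version":"7.7.0",
 "indices":[".kibana_1",".kibana_task_manager_1"],
 "state":"SUCCESS"}]}
''')

snap = response["snapshots"][0]
print(snap["version"])  # 7.7.0 -- same version as the running cluster
```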
Now I have ended up in limbo: my old indices are gone, and all my saved objects with them, but I can't load Kibana with the restored indices either. Furthermore, what the error says is not true. The snapshot was made with v7.7 (see above), and it has been restored to the exact same v7.7 instance it was made from.
Can someone please help?
Cheers,
Laz