Failing to restore from snapshot

I'm evaluating ECE and testing its ability to restore from a snapshot. I had a functional ECE install with a number of deployments, all of which were regularly sending snapshots to MinIO without issue. Before uninstalling ECE, I ensured all of the deployments had recently been backed up to the S3 MinIO instance.

I reinstalled ECE and added my repository information again. I enabled snapshots for one of the default deployments to confirm MinIO connectivity, and the new deployment was able to create and upload a snapshot successfully.

When I move on to try to restore from a snapshot, none of my previous snapshots are listed. I don't know what is going on here. Are there limitations in the ECE trial license that prevent me from restoring?

I can confirm that my previous deployments were running Elastic Stack 7.5, and that is the same target version I'm trying to restore to.

Snapshot Recovery on a Separate ECE Setup — does this still apply here? Are there any plans to improve this process? As a user I expect to see the snapshots listed; if they are unusable, they could be shown with a strikethrough or a link to information explaining why they are marked unusable. Leaving me completely in the dark, searching forums for hours only to ultimately find no good answers, is very frustrating.

Thanks

The UUID of your new deployment is different, so the snapshot repository will only be looking in one place.

You could manually add a snapshot repository via the API to restore some snapshot data from the original location.

It took me a while to make sense of it and verify it was safe.

If you can't figure it out, I'll post an example in the next few days when I'm back from holiday; I have the exact commands in my playbook.

Jugsofbeef,

Thanks for that offer, it would be greatly appreciated. I'm not sure why this snapshot/restore system needs to behave this way, other than to make it difficult to operate ECE without a license. Almost any other application can see its previously created snapshots and restore from them painlessly. This is tedious and has led to data loss and a lot of frustration. Fortunately I'm only evaluating ECE at the moment, but making it so difficult to recover from a disaster does not encourage me to buy a license. That said, I won't be reinstalling ECE very often, or at all, once a license is in place. Anyway, I'll take these concerns up with my sales rep. Looking forward to seeing your solution. Thanks again.

Hi,

Have a read of these commands, which I put together for the same thing.

The ECE UI isn't quite as up to date as the API and doesn't cover everything a customer might want, so it's best to use the API for these things. I've logged enhancement requests, but this should help.

I haven't had time to write explanations, but these should do the trick.


GET /_snapshot

PUT /_snapshot/my_manual_backup
{
  "type": "s3",
  "settings": {
    "bucket": "bucket_name",
    "base_path": "manual/uuid",
    "endpoint": "domain.name.goes.here.or.ip.address",
    "protocol": "https",
    "access_key": "access_key",
    "secret_key": "secret_key"
  }
}

GET /_snapshot

PUT /_snapshot/my_manual_backup/%3Csnapshot-%7Bnow%7BYYYY.MM.dd_HH-mm-ss%7D%7D%3E?wait_for_completion=false
{
  "indices": "kibana_sample_data_ecommerce,kibana_sample_data_logs",
  "ignore_unavailable": true,
  "include_global_state": false,
  "metadata": {
    "taken_by": "Test Person",
    "taken_because": "Test Backup v1"
  }
}
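A note on the snapshot name in the PUT above: it is the date-math expression `<snapshot-{now{YYYY.MM.dd_HH-mm-ss}}>`, URL-encoded because `<`, `>`, `{`, and `}` aren't valid unescaped in a URL path. As a quick sketch, Python's standard `urllib.parse.quote` reproduces the encoded form:

```python
# Encode the date-math snapshot name for use in the request path.
# Elasticsearch expands {now{...}} server-side into a timestamp.
from urllib.parse import quote

name = "<snapshot-{now{YYYY.MM.dd_HH-mm-ss}}>"
encoded = quote(name, safe="")
print(encoded)  # %3Csnapshot-%7Bnow%7BYYYY.MM.dd_HH-mm-ss%7D%7D%3E
```

This is handy if you want to template snapshot names from a script rather than hand-encoding them.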

GET /_cat/snapshots/my_manual_backup?v=true
GET /_snapshot/my_manual_backup/snapshot-2019.09.26/_status

## The ignore_index_settings list is important to get right, as missing things out will result in a red cluster state.

POST /_snapshot/my_manual_backup/snapshot-2019.09.26/_restore
{
  "indices": "*",
  "ignore_unavailable": true,
  "include_aliases" : true,
  "index_settings": { "index.number_of_replicas": 0 },
  "ignore_index_settings": ["index.refresh_interval","index.routing.allocation.include.instance_configuration"],
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}
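The `rename_pattern`/`rename_replacement` pair above is a regex substitution applied to each restored index name, so everything comes back prefixed with `restored_` instead of clashing with live indices. A minimal sketch of the same substitution in Python (note Elasticsearch's replacement syntax uses `$1` where Python uses `\1`):

```python
import re

# Same rename rule as in the _restore body: capture the whole index
# name and prefix it with "restored_".
rename_pattern = r"(.+)"
rename_replacement = r"restored_\1"

renamed = [
    re.sub(rename_pattern, rename_replacement, index)
    for index in ["kibana_sample_data_ecommerce", "kibana_sample_data_logs"]
]
print(renamed)
# ['restored_kibana_sample_data_ecommerce', 'restored_kibana_sample_data_logs']
```

If you only want to rename a subset, tighten the pattern (e.g. `(kibana_.+)`) so untouched indices keep their names.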

A repository can be unregistered using the following command:

DELETE /_snapshot/my_manual_backup

When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository stores its snapshots. The snapshots themselves are left untouched and in place.

The bucket name and endpoint should match your existing MinIO settings, but the base_path should be set to the OLD UUID where your OLD snapshots had been going.
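If you're not sure what the old UUID was, you can list the top-level prefixes in the snapshot bucket. This is a minimal sketch under two assumptions: that you have an S3-style client object (e.g. boto3 configured against your MinIO endpoint), and that each deployment's snapshots were written under their own top-level directory in the bucket; `list_snapshot_prefixes` is a hypothetical helper, not part of any library:

```python
def list_snapshot_prefixes(s3_client, bucket):
    """Return the top-level 'directory' prefixes in the bucket.

    With ECE-managed repositories there is typically one prefix per
    cluster UUID, so the old deployment's UUID should appear here.
    Works with any boto3-style client exposing list_objects_v2.
    """
    resp = s3_client.list_objects_v2(Bucket=bucket, Delimiter="/")
    return [p["Prefix"].rstrip("/") for p in resp.get("CommonPrefixes", [])]
```

With boto3 you would build the client roughly as `boto3.client("s3", endpoint_url="https://domain.name.goes.here.or.ip.address", aws_access_key_id=..., aws_secret_access_key=...)` and call `list_snapshot_prefixes(client, "bucket_name")` (all placeholder values from the repository settings above).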

This part had me confused for a while, but eventually I realised my "manual" repository was reading from the OLD location while 'found-snapshots' was reading from and writing to the NEW location. Don't mess with found-snapshots... that's the ECE-managed one :slight_smile:

@Rob_wylde Yep, as @Jugsofbeer (thanks!) explained, there's a functional limitation of built-in snapshot repos: they are tied to deployments (as opposed to crawling the MinIO filesystem), so if a deployment is deleted (or never existed in an ECE deployment), then in order to bring the data in you have to add the repo by hand via either the ES or ECE API.

Your points about how and why this is frustrating are well made - there are good implementation reasons why it works the way it currently does (there is some redesign work going on), but it should at the very least be better documented, I agree, and I'll raise an issue for that.

Some combination of the link you posted and the posts in this thread should work. Let us know if you're still having issues.

Alex

