Update:
I did some work this week that gets almost all the way to what you're looking for here, but I've hit a problem. My steps were:
- `GET /_snapshot/found-snapshots` from Deployment-A's API and store the repository config:

```python
repo_info = response.json()["found-snapshots"]
```
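That first step can be sketched like this; `fetch_repo_info` and `extract_repo_info` are names I made up for illustration, and the response shape is the standard one returned by `GET /_snapshot/<repo>`:

```python
def extract_repo_info(payload: dict, repo: str = "found-snapshots") -> dict:
    """Pull one repository's config ({"type": ..., "settings": {...}}) out of
    the GET /_snapshot/<repo> response, which is keyed by repository name."""
    return payload[repo]

def fetch_repo_info(host: str, api_key: str, repo: str = "found-snapshots") -> dict:
    """GET the repository definition from the source deployment."""
    import httpx  # third-party client; imported lazily so extract_repo_info stays testable offline
    res = httpx.get(
        f"{host.rstrip('/')}/_snapshot/{repo}",
        headers={"Authorization": f"ApiKey {api_key}"},
    )
    res.raise_for_status()
    return extract_repo_info(res.json(), repo)
```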
- register the repo on Deployment-B as read-only:

```python
import copy
from dataclasses import dataclass

@dataclass
class ElasticCluster:
    """Connection details for the Elasticsearch API."""
    elasticsearch_host: str
    elasticsearch_api_key: str
    # ...

repo_info = copy.deepcopy(repo_info)
# https://www.elastic.co/guide/en/elasticsearch/reference/current/repository-s3.html
repo_info["settings"]["readonly"] = True
# repo_info["settings"]["compress"] = False

# es_request is just a wrapper I made around httpx that adds a header with the
# API key stored in target_cluster and builds the URL from elasticsearch_host
res = es_request(
    cluster=target_cluster,
    method="PUT",
    path=f"/_snapshot/{target_repo}",
    json=repo_info,
)
```
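For completeness, the `es_request` wrapper described above could look roughly like this; everything beyond the names shown in the snippet is an assumption:

```python
from dataclasses import dataclass

@dataclass
class ElasticCluster:
    """Connection details for the Elasticsearch API (as in the snippet above)."""
    elasticsearch_host: str
    elasticsearch_api_key: str

def build_request_args(cluster: ElasticCluster, method: str, path: str) -> dict:
    """Compose method, URL, and auth header; pure, so it is easy to unit-test."""
    return {
        "method": method,
        "url": cluster.elasticsearch_host.rstrip("/") + path,
        "headers": {"Authorization": f"ApiKey {cluster.elasticsearch_api_key}"},
    }

def es_request(cluster: ElasticCluster, method: str, path: str, **kwargs):
    import httpx  # third-party; imported lazily so build_request_args stays testable offline
    args = build_request_args(cluster, method, path)
    return httpx.request(args["method"], args["url"], headers=args["headers"], **kwargs)
```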
- Now when I visit `/app/management/data/snapshot_restore/repositories/` in Deployment-B's Kibana and try to "Verify repository" on the new repo, I get an error suggesting that Deployment-A's S3 client is not accessible to Deployment-B:
"Unknown s3 client name [elastic-internal-XXXXX]. Existing client configs: elastic-internal-YYYYY,default"
Interestingly, this isn't a problem on Deployment-C, which I initially created by restoring a snapshot of Deployment-A via the Elastic Cloud UI. So I think the UI magically granted Deployment-C access to Deployment-A's S3 client, and I'm now looking for a way to do the same programmatically...
- The last step would be to hit Deployment-B's API and request a restore of the desired snapshot from the repo registered above.
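That final call could be sketched as follows, assuming the standard `POST /_snapshot/<repo>/<snapshot>/_restore` API; the index pattern and flags here are illustrative placeholders, not what I've actually run:

```python
def build_restore_request(repo: str, snapshot: str, indices: str = "*") -> tuple[str, dict]:
    """Return the path and JSON body for POST /_snapshot/<repo>/<snapshot>/_restore."""
    path = f"/_snapshot/{repo}/{snapshot}/_restore"
    body = {
        "indices": indices,
        # avoid clobbering cluster-wide state (templates, settings) on the target
        "include_global_state": False,
    }
    return path, body

# This would then go through the same wrapper, e.g.:
# path, body = build_restore_request(target_repo, "some-snapshot-name")
# es_request(cluster=target_cluster, method="POST", path=path, json=body)
```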
Also, btw, I started a related thread here in February 2024: Restore from found-snapshots across clusters.