I've looked at the other tickets for similar issues and have not yet found a solution, or even a suggestion for a next step.
On Elastic Cloud 7.15 I have a repo set up to Backblaze and it's working well enough (a little slow, but …).
On on-prem Elasticsearch 8.15, when I attempt to configure the repo, it fails immediately with the following:
{
"error": {
"root_cause": [
{
"type": "repository_verification_exception",
"reason": "[backblaze] path is not accessible on master node"
}
],
"type": "repository_verification_exception",
"reason": "[backblaze] path is not accessible on master node",
"caused_by": {
"type": "i_o_exception",
"reason": "Unable to upload object [tests-poJcFn-YSBCXBlSMlWqpxw/master.dat] using a single upload",
"caused_by": {
"type": "sdk_client_exception",
"reason": "sdk_client_exception: Failed to connect to service endpoint: ",
"caused_by": {
"type": "i_o_exception",
"reason": "Connect timed out"
}
}
}
},
"status": 500
}
The logs show a similar stack trace. Note the innermost frames: the AWS SDK is falling back to fetching credentials from the EC2 instance-metadata service, which as far as I can tell means it isn't picking up the keystore credentials.
[2024-08-26T23:20:02,161][WARN ][r.suppressed ] [elastic-tiebreaker] path: /_snapshot/backblaze, params: {repository=backblaze}, status: 500
org.elasticsearch.transport.RemoteTransportException: [elastic1][10.10.10.69:9300][cluster:admin/repository/put]
Caused by: org.elasticsearch.repositories.RepositoryVerificationException: [backblaze] path is not accessible on master node
Caused by: java.io.IOException: Unable to upload object [tests-ZX2GBMIhQFuPJU79lCqUlw/master.dat] using a single upload
at org.elasticsearch.repositories.s3.S3BlobContainer.executeSingleUpload(S3BlobContainer.java:460) ~[?:?]
at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$1(S3BlobContainer.java:138) ~[?:?]
at java.security.AccessController.doPrivileged(AccessController.java:571) ~[?:?]
at org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:37) ~[?:?]
at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:136) ~[?:?]
at org.elasticsearch.common.blobstore.BlobContainer.writeBlob(BlobContainer.java:123) ~[elasticsearch-8.15.0.jar:?]
at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlobAtomic(S3BlobContainer.java:298) ~[?:?]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:2154) ~[elasticsearch-8.15.0.jar:?]
at org.elasticsearch.repositories.RepositoriesService.lambda$validatePutRepositoryRequest$11(RepositoriesService.java:361) ~[elasticsearch-8.15.0.jar:?]
at org.elasticsearch.action.ActionRunnable$1.doRun(ActionRunnable.java:36) ~[elasticsearch-8.15.0.jar:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:984) ~[elasticsearch-8.15.0.jar:?]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.15.0.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
at java.lang.Thread.run(Thread.java:1570) ~[?:?]
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: sdk_client_exception: Failed to connect to service endpoint:
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:100) ~[?:?]
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:70) ~[?:?]
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:75) ~[?:?]
at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66) ~[?:?]
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:60) ~[?:?]
...skipping...
I have followed as many different tutorials as I can find. The common steps are:
- add the app key and secret to the Elasticsearch keystore using the CLI, then call the /_nodes/reload_secure_settings API to sync the nodes (see the sketch after this list)
- use the elasticsearch.yml config to add the client settings (I couldn't find a tutorial that made it totally clear what I needed to add)
- use the snapshot API or Kibana to register the repo (both fail immediately)
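For reference, this is roughly what I ran for the keystore step, assuming the client name "backblaze" (my understanding is the B2 application key ID goes in access_key and the application key itself in secret_key):

# run on every node, then reload secure settings once
bin/elasticsearch-keystore add s3.client.backblaze.access_key
bin/elasticsearch-keystore add s3.client.backblaze.secret_key
curl -X POST "https://localhost:9200/_nodes/reload_secure_settings?pretty"

(The localhost URL is just my setup; substitute whatever you use to reach the cluster.)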
The JSON I'm using to register via the API (I've used the simplest version with minimal settings; this is the latest test):
{
"type": "s3",
"settings": {
"bucket": "elasticsearch-onprem",
"endpoint": "s3.us-west-004.backblazeb2.com",
"region": "us-west-004",
"compress": "true",
"server_side_encryption": "true",
"client": "default",
"path_style_access" : "true",
"protocol": "https",
"max_retries": "10",
"read_timeout": "2m"
}
}
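I'm registering it with a request like this (via Kibana Dev Tools or the curl equivalent below, where repo-settings.json is just the body above saved to a file):

curl -X PUT "https://localhost:9200/_snapshot/backblaze?pretty" \
  -H "Content-Type: application/json" \
  -d @repo-settings.json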
Relevant elasticsearch.yml config:
s3.client.backblaze.endpoint: "s3.us-west-004.backblazeb2.com"
s3.client.backblaze.path_style_access: true
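If it helps, my understanding from the docs is that endpoint, protocol, and path-style access are all client-level settings, so the fuller client block I'd expect to need looks something like this (whether region is needed here at all is an assumption on my part, mirroring the repo settings above):

s3.client.backblaze.endpoint: "s3.us-west-004.backblazeb2.com"
s3.client.backblaze.protocol: https
s3.client.backblaze.path_style_access: true
s3.client.backblaze.region: us-west-004

with the repository body then using "client": "backblaze" instead of "default".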
I've tried using s3.client.default too and there is no difference; every combination fails the same way.
I reached out to Backblaze, who suggested increasing the timeout, but when I try to configure the repo it fails immediately and doesn't respect the timeouts/retries, which would make sense if the connect timeout is happening against the instance-metadata endpoint rather than against Backblaze itself.
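For completeness: read_timeout and max_retries appear in my repository JSON above, but my understanding is they are client settings, so presumably they'd need to move into elasticsearch.yml (and need a node restart), something like:

s3.client.backblaze.read_timeout: 2m
s3.client.backblaze.max_retries: 10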