Elasticsearch S3 snapshot

Hi, good people of Elastic,

I'm trying to upload a snapshot to S3. Our Elasticsearch cluster runs on top of our Kubernetes cluster; currently we have two Elasticsearch pods running, and I have already enabled the S3 plugin in the image. I also uploaded the access key and secret key using this method:


kubectl exec -it <pod> -n <namespace> -- bash
echo <secret key> | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key
echo <access key> | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key

Please note I did this on each of our Elasticsearch pods. I know this is not best practice; for now it's just what I did.
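A minimal sketch of those per-pod steps as a loop (the namespace, pod names, and keys below are placeholders for illustration, not values from my cluster):

```shell
# Placeholders: adjust NAMESPACE, the pod names, and the keys for your cluster.
NAMESPACE=elastic
ACCESS_KEY='<access key>'
SECRET_KEY='<secret key>'

for POD in elasticsearch-0 elasticsearch-1; do
  # Add both S3 client credentials to this pod's Elasticsearch keystore
  kubectl exec -n "$NAMESPACE" "$POD" -- \
    sh -c "echo '$ACCESS_KEY' | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key"
  kubectl exec -n "$NAMESPACE" "$POD" -- \
    sh -c "echo '$SECRET_KEY' | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key"
  # List the keystore entries to confirm both keys were added
  kubectl exec -n "$NAMESPACE" "$POD" -- bin/elasticsearch-keystore list
done
```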

After that, I ran this command to register the S3 snapshot repository:

PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "test-bucket",
    "region": "ap-southeast-1",
    "proxy.host": "10.233.42.99",
    "proxy.port": "443",
    "endpoint": "s3-ap-southeast-1.amazonaws.com"
  }
}
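(For anyone not using Kibana Dev Tools: the same request via curl might look like the sketch below, with the same settings as above; `localhost:9200` is an assumption for wherever your cluster's HTTP endpoint is exposed.)

```shell
# Register the S3 repository over the REST API (same settings as the Dev Tools request).
# localhost:9200 is a placeholder for your Elasticsearch HTTP endpoint.
curl -X PUT "localhost:9200/_snapshot/my_s3_repository?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "s3",
    "settings": {
      "bucket": "test-bucket",
      "region": "ap-southeast-1",
      "proxy.host": "10.233.42.99",
      "proxy.port": "443",
      "endpoint": "s3-ap-southeast-1.amazonaws.com"
    }
  }'
```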

This is the error message I'm getting:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[my_s3_repository] path  is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[my_s3_repository] path  is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-vT-sKbkqSy60VBPagl_kPg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "sdk_client_exception: Failed to connect to service endpoint: ",
        "caused_by" : {
          "type" : "i_o_exception",
          "reason" : "Connect timed out"
        }
      }
    }
  },
  "status" : 500
}

Currently I'm out of ideas; I'm not entirely sure my fields are correct. Any suggestions would be greatly appreciated.

Hi, just an update.

I removed the proxy fields from the repository settings and added them to elasticsearch.yml instead:

    s3.client.default.proxy.host: 10.233.42.99
    s3.client.default.proxy.port: 443
    s3.client.default.endpoint: s3.ap-southeast-1.amazonaws.com

Here is my command now:

PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "test-bucket",
    "base_path": "elk-backup",
    "endpoint": "s3.ap-southeast-1.amazonaws.com"
  }
}

Same error:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[my_s3_repository] path [elk-backup] is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[my_s3_repository] path [elk-backup] is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [elk-backup/tests-QjM6AYnjT6KptvD3Vp0aEQ/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "sdk_client_exception: Failed to connect to service endpoint: ",
        "caused_by" : {
          "type" : "i_o_exception",
          "reason" : "Connect timed out"
        }
      }
    }
  },
  "status" : 500
}

Here is a snippet of the Elasticsearch logs. Note the `InstanceMetadataServiceCredentialsFetcher` frames below: the AWS SDK is falling back to EC2 instance-metadata credentials, which suggests the keystore keys were never picked up by the running nodes.

"Caused by: java.io.IOException: Unable to upload object [elk-backup/tests-a4CUlMRHQlSJ7D8LNteC6w/master.dat] using a single upload",
"at org.elasticsearch.repositories.s3.S3BlobContainer.executeSingleUpload(S3BlobContainer.java:349) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$1(S3BlobContainer.java:122) ~[?:?]",
"at java.security.AccessController.doPrivileged(AccessController.java:554) ~[?:?]",
"at org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:37) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:120) ~[?:?]",
"at org.elasticsearch.common.blobstore.BlobContainer.writeBlob(BlobContainer.java:116) ~[elasticsearch-7.12.1.jar:7.12.1]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlobAtomic(S3BlobContainer.java:137) ~[?:?]",
"at org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1293) ~[elasticsearch-7.12.1.jar:7.12.1]",
"at org.elasticsearch.repositories.RepositoriesService$4.doRun(RepositoriesService.java:349) ~[elasticsearch-7.12.1.jar:7.12.1]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732) ~[elasticsearch-7.12.1.jar:7.12.1]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-7.12.1.jar:7.12.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[?:?]",
"at java.lang.Thread.run(Thread.java:831) [?:?]",
"Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: sdk_client_exception: Failed to connect to service endpoint: ",
"at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:100) ~[?:?]",
"at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:70) ~[?:?]",
"at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:75) ~[?:?]",
"at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66) ~[?:?]",
"at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58) ~[?:?]",
"at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46) ~[?:?]",
"at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112) ~[?:?]",
"at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68) ~[?:?]",
"at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:166) ~[?:?]",
"at com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper.getCredentials(EC2ContainerCredentialsProviderWrapper.java:75) ~[?:?]",
"at java.security.AccessController.doPrivileged(AccessController.java:312) ~[?:?]",
"at org.elasticsearch.repositories.s3.SocketAccess.doPrivileged(SocketAccess.java:31) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3Service$PrivilegedInstanceProfileCredentialsProvider.getCredentials(S3Service.java:218) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1251) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:827) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:777) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680) ~[?:?]",

Hi all,
Finally got it working.

After adding the credentials for the access key and secret key using these commands:

kubectl exec -it <pod> -n <namespace> -- bash
echo <secret key>| bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key
echo <access key>| bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key

be sure to run this. Without it, the nodes never reload the keystore, and the AWS SDK falls back to instance-metadata credentials (the connect timeout seen in the stack trace above):

POST /_nodes/reload_secure_settings
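Outside Dev Tools, the same reload can be done with curl (`localhost:9200` is an assumption for your cluster's HTTP endpoint):

```shell
# Tell every node to re-read its keystore so the new S3 credentials take effect.
curl -X POST "localhost:9200/_nodes/reload_secure_settings?pretty"
```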

That's all I did. Hope this helps someone with the same issue.
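Once the repository verifies, a first snapshot can be taken; this is just a sketch, and `snapshot_1` and `localhost:9200` are placeholders:

```shell
# Take a snapshot of all indices into the registered S3 repository.
curl -X PUT "localhost:9200/_snapshot/my_s3_repository/snapshot_1?wait_for_completion=true&pretty"
# Check the snapshot's state afterwards:
curl -X GET "localhost:9200/_snapshot/my_s3_repository/snapshot_1?pretty"
```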


Thanks heaps for sharing your solution to this!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.