Repository verification exception when trying to create a snapshot repository

version: 7.6.1
We are running the latest version of Elasticsearch on Kubernetes, and the cluster is not able to verify the snapshot repository.

As you can see below, I'm able to upload files to the bucket, and I was also able to delete the file. This is not an IAM issue:

[root@elasticsearch-es-masters-0 elasticsearch]# touch hello.txt
[root@elasticsearch-es-masters-0 elasticsearch]# aws s3 cp hello.txt s3://S3_BUCKET_NAME/
upload: ./hello.txt to s3://S3_BUCKET_NAME/hello.txt
[root@elasticsearch-es-masters-0 elasticsearch]# curl -u elastic:PASSWORD -X PUT "http://elasticsearch-es-http.elastic-system.svc.cluster.local:9200/_snapshot/s3_backup?pretty" -H 'Content-Type: application/json' -d'
> {
>   "type": "s3",
>   "settings": {
>     "bucket": "S3_BUCKET_NAME",
>     "max_restore_bytes_per_sec": "1gb",
>     "max_snapshot_bytes_per_sec": "1gb",
>     "storage_class": "standard_ia",
>     "base_path": "backups",
>     "buffer_size": "500mb",
>     "endpoint": "s3.us-west-2.amazonaws.com"
>   }
> }
>
> '
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[s3_backup] path [backups] is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[s3_backup] path [backups] is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [backups/tests-8wcjM8cuSJGfjsFQg9YvQA/master.dat] using a single upload",
      "caused_by" : {
        "type" : "amazon_s3_exception",
        "reason" : "amazon_s3_exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 73F47A7189CB693C; S3 Extended Request ID: T19ifnSdMhsr8vbBZUAdxV3Yw6AxgTXiSxDHur/4KUf9FyZ+UmXW0+oYHj+4FCOGk4zty4ysJfA=)"
      }
    }
  },
  "status"

Errors in the logs:

{"type": "server", "timestamp": "2020-03-26T06:37:54,531Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "elasticsearch", "node.name": "elasticsearch-es-masters-0", "message": "path: /_snapshot/s3_backup/_verify, params: {repository=s3_backup}", "cluster.uuid": "xpPKgCYEQ4CjCCmiDLChCw", "node.id": "6s3TRh99TtSD0MxUW_radA" ,
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [elasticsearch-es-masters-2][10.19.192.230:9300][cluster:admin/repository/verify]",
"Caused by: org.elasticsearch.repositories.RepositoryVerificationException: [s3_backup] path [backups] is not accessible on master node",
"at org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1041) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at org.elasticsearch.repositories.RepositoriesService$3.doRun(RepositoriesService.java:246) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]",
"at java.lang.Thread.run(Thread.java:830) [?:?]",
"Caused by: java.io.IOException: Unable to upload object [backups/tests-s8Lj28kXTxqZqvl72knjGw/master.dat] using a single upload",
"at org.elasticsearch.repositories.s3.S3BlobContainer.executeSingleUpload(S3BlobContainer.java:323) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$0(S3BlobContainer.java:97) ~[?:?]",
"at java.security.AccessController.doPrivileged(AccessController.java:554) ~[?:?]",
"at org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:48) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:95) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlobAtomic(S3BlobContainer.java:112) ~[?:?]",
"at org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1036) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at org.elasticsearch.repositories.RepositoriesService$3.doRun(RepositoriesService.java:246) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]",
"at java.lang.Thread.run(Thread.java:830) ~[?:?]",
"Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: amazon_s3_exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 585658CB87954FA3; S3 Extended Request ID: 86TwI68NW4vHP/jlmyXU+GhTR7R0KfmxcD9pHAIMnuGzbD2PFrP0K+rWWd8gnMi5Vdrr90PqvKM=)",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532) ~[?:?]",
"at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512) ~[?:?]",
"at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4926) ~[?:?]",
"at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4872) ~[?:?]",
"at com.amazonaws.services.s3.AmazonS3Client.access$300(AmazonS3Client.java:390) ~[?:?]",
"at com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy.invokeServiceCall(AmazonS3Client.java:5806) ~[?:?]",
"at com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1794) ~[?:?]",
"at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1754) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$executeSingleUpload$17(S3BlobContainer.java:320) ~[?:?]",
"at org.elasticsearch.repositories.s3.SocketAccess.lambda$doPrivilegedVoid$0(SocketAccess.java:57) ~[?:?]",
"at java.security.AccessController.doPrivileged(AccessController.java:312) ~[?:?]",
"at org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedVoid(SocketAccess.java:56) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.executeSingleUpload(S3BlobContainer.java:319) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$0(S3BlobContainer.java:97) ~[?:?]",
"at java.security.AccessController.doPrivileged(AccessController.java:554) ~[?:?]",
"at org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:48) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:95) ~[?:?]",
"at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlobAtomic(S3BlobContainer.java:112) ~[?:?]",
"at org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1036) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at org.elasticsearch.repositories.RepositoriesService$3.doRun(RepositoriesService.java:246) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.1.jar:7.6.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]",
"at java.lang.Thread.run(Thread.java:830) ~[?:?]"] }

Please let me know what the issue is.

Thank you.

All nodes in the cluster need access to the repository, and the error message indicates that at least one of the master nodes does not.
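
One way to narrow this down is to re-run repository verification (the response lists the nodes that passed, and the error names the one that failed), confirm the repository-s3 plugin is present on every node, and make sure no explicit S3 keys are lingering in any node's keystore, since s3.client.default.access_key / s3.client.default.secret_key take precedence over role-based credentials. A rough sketch, reusing the endpoint, credentials, and repository name from the original request (the keystore command is run from the Elasticsearch home directory inside each pod):

curl -u elastic:PASSWORD -X POST "http://elasticsearch-es-http.elastic-system.svc.cluster.local:9200/_snapshot/s3_backup/_verify?pretty"
curl -u elastic:PASSWORD "http://elasticsearch-es-http.elastic-system.svc.cluster.local:9200/_cat/plugins?v"
bin/elasticsearch-keystore list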

Hi,

Thank you for your reply.
I just SSHed into all 3 masters and was able to upload a test file to the bucket.

Thank you.
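
One detail worth ruling out, since the CLI test shown earlier targeted the bucket root: an IAM policy can allow writes at the root while denying the backups/ prefix the repository actually writes to. A prefix-specific test, reusing the bucket and base_path from the original request, would look like:

touch hello.txt
aws s3 cp hello.txt s3://S3_BUCKET_NAME/backups/hello.txt
aws s3 rm s3://S3_BUCKET_NAME/backups/hello.txt

If that succeeds from every master, the prefix itself is not the problem.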

FYI: we are using IAM roles for service accounts (IRSA) in EKS - https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html
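
With IRSA, the role and token are handed to processes through the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables, which the AWS CLI understands; whether the S3 SDK bundled with the repository-s3 plugin in the running Elasticsearch version also picks them up is worth checking against the plugin docs, since that difference can explain the CLI succeeding from inside the pod while Elasticsearch still gets a 403. For reference, a rough sketch of the usual wiring (the service account name, account ID, and role name are placeholders; the S3 actions follow what the repository-s3 documentation generally recommends):

kubectl annotate serviceaccount -n elastic-system SERVICE_ACCOUNT_NAME \
  eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME

Policy attached to that role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME/*"
    }
  ]
}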

Any update on this, please?

Thank you.

Any update here, please?

Thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.