Issue with s3 repository

I've just upgraded our cluster from 5.6 to 6.1; everything is fine apart from accessing repositories in an S3 bucket.

I've migrated the access key and secret key settings to the keystore using these two options:
s3.client.default.access_key and s3.client.default.secret_key
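
To confirm they were stored, elasticsearch-keystore list prints the names (not the values) of the settings in the keystore, so both entries should show up there:

bin/elasticsearch-keystore list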

Elasticsearch starts and I don't get any deprecation warnings, but I can't access the buckets.
I can query the snapshot repository to get its settings, for example with:
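GET /_snapshot/s3

which returns: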
{
  "s3": {
    "type": "s3",
    "settings": {
      "bucket": "red-elasticsearch",
      "endpoint": "eu-west-1",
      "max_retries": "3",
      "compress": "true"
    }
  }
}

If I try to list the snapshots in the repository I see:

GET /_snapshot/s3/*

{
  "error": {
    "root_cause": [
      {
        "type": "amazon_s3_exception",
        "reason": "amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 3C2DA71B68E48D55; S3 Extended Request ID: q5ljC2KE9pw7ImwkWSomvdm2u4sNYkg804NSTDrAi/zDhRfqWj7fnVlQiDbxIuzDhDYr4uivFRk=)"
      }
    ],
    "type": "amazon_s3_exception",
    "reason": "amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 3C2DA71B68E48D55; S3 Extended Request ID: q5ljC2KE9pw7ImwkWSomvdm2u4sNYkg804NSTDrAi/zDhRfqWj7fnVlQiDbxIuzDhDYr4uivFRk=)"
  },
  "status": 500
}

and if I try to update the settings in the repository or create a new one I see this:

{
  "error": {
    "root_cause": [
      {
        "type": "amazon_s3_exception",
        "reason": "amazon_s3_exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: BA23BA4A724B0021; S3 Extended Request ID: RuVxL+nP1ZZrnDL/WeFV48U7kNUkVM6RKsS0CHkAbzdIlmJq+duuzB+DR/DGMCjn+pu8j7UAKss=)"
      }
    ],
    "type": "blob_store_exception",
    "reason": "Failed to check if blob [master.dat-temp] exists",
    "caused_by": {
      "type": "amazon_s3_exception",
      "reason": "amazon_s3_exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: BA23BA4A724B0021; S3 Extended Request ID: RuVxL+nP1ZZrnDL/WeFV48U7kNUkVM6RKsS0CHkAbzdIlmJq+duuzB+DR/DGMCjn+pu8j7UAKss=)"
    }
  },
  "status": 500
}

The access keys worked fine when they were in the yml file, but not from the keystore.
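
For comparison, on 5.6 they were set in elasticsearch.yml, roughly like this (from memory, so treat the exact names as an assumption; the old cloud.aws.* style was removed in 6.x in favour of the keystore):

cloud.aws.access_key: <ACCESS_KEY>
cloud.aws.secret_key: <SECRET_KEY>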

Please format your code using the </> icon as explained in this guide. It will make your post more readable.

Or use markdown style like:

```
CODE
```

This is incorrect:

"endpoint": "eu-west-1",

You need to read https://www.elastic.co/guide/en/elasticsearch/plugins/current/repository-s3-client.html (see endpoint)

Noted on the markup.
I was trying to change the region setting to an endpoint setting because I'd misread another post, from an issue flagged on GitHub.

The snapshot settings are:

{
  "s3": {
    "type": "s3",
    "settings": {
      "bucket": "liv-elasticsearch",
      "max_retries": "3",
      "region": "eu-west-1",
      "compress": "true"
    }
  }
}

I've tried it with both region and endpoint specified, and any query or attempt to use the repository still gives this error:

reason": "amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403;

Did you read the documentation I linked to?

There is no region setting. So please use endpoint.

Yes I did, hence me saying "I've tried it with both region and endpoint"
The documentation also states that it will be figured out based on the bucket location if it isn't specified, so I've tried removing the setting too.

Any change to the snapshot repository, either creating a new one or trying to update the existing one, results in:

{
  "error": {
    "root_cause": [
      {
        "type": "amazon_s3_exception",
        "reason": "amazon_s3_exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 2870298C14F596D1; S3 Extended Request ID: gijUYZ3nlM9XafCvB/tcLAPrBt4N6UuvRAXkCpaxOdikKv3ee/8vn625MZscvCBkfpwJC5cW4sM=)"
      }
    ],
    "type": "blob_store_exception",
    "reason": "Failed to check if blob [master.dat-temp] exists",
    "caused_by": {
      "type": "amazon_s3_exception",
      "reason": "amazon_s3_exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 2870298C14F596D1; S3 Extended Request ID: gijUYZ3nlM9XafCvB/tcLAPrBt4N6UuvRAXkCpaxOdikKv3ee/8vn625MZscvCBkfpwJC5cW4sM=)"
    }
  },
  "status": 500
}

As an aside, region autocompletes in the Dev Tools interface; endpoint does not.

Can you share the full settings you passed when you used endpoint please?

The documentation also states that it will be figured out based on the bucket location if it isn't specified, so I've tried removing the setting too.

Yeah. This is a bug. See

This is the configuration I used to configure the endpoint:

PUT /_snapshot/s3
{
  "type": "s3",
    "settings": {
      "bucket": "liv-elasticsearch",
      "compress": "true",
      "endpoint": "s3.eu-west-1.amazonaws.com",
      "max_retries": 3
    }
}

Again when configuring it I receive:

{
  "error": {
    "root_cause": [
      {
        "type": "amazon_s3_exception",
        "reason": "amazon_s3_exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 92B995C2907448C6; S3 Extended Request ID: jvwcNK1adHXiLmm6nv9dkFkCyNEeLhwz4x0WJlG3bhDvYTvPP+t2IH8a/f7NQ3SJcfF4xSvvclw=)"
      }
    ],
    "type": "blob_store_exception",
    "reason": "Failed to check if blob [master.dat-temp] exists",
    "caused_by": {
      "type": "amazon_s3_exception",
      "reason": "amazon_s3_exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 92B995C2907448C6; S3 Extended Request ID: jvwcNK1adHXiLmm6nv9dkFkCyNEeLhwz4x0WJlG3bhDvYTvPP+t2IH8a/f7NQ3SJcfF4xSvvclw=)"
    }
  },
  "status": 500
}

If I try and get the snapshots from the repository:

GET /_snapshot/s3/*

I receive this error:

{
  "error": {
    "root_cause": [
      {
        "type": "amazon_s3_exception",
        "reason": "amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 4E0B6C46C4F3171D; S3 Extended Request ID: WrQ3HB63/PQWOblmGWvSKejATBLYIOSpfDNwo9sHaM8aIRSiVBtdkDk/lKF2pO3zcx09KjOMOsE=)"
      }
    ],
    "type": "amazon_s3_exception",
    "reason": "amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 4E0B6C46C4F3171D; S3 Extended Request ID: WrQ3HB63/PQWOblmGWvSKejATBLYIOSpfDNwo9sHaM8aIRSiVBtdkDk/lKF2pO3zcx09KjOMOsE=)"
  },
  "status": 500
}

I've tried new access keys, and I've tried the ones that used to work fine with the previous release; both give this error.
I've tried piping the access key in from a file, echoing it, and typing it manually. I can't see any way of checking what is actually being passed to Amazon.

The first error seems to indicate that the master node has not been able to write the master.dat-temp file, or that not all of the nodes are able to read that file.

Can you manually check if it exists in liv-elasticsearch bucket?
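
For example, with the AWS CLI (assuming it is installed and configured with the same credentials; the command is just a sketch):

aws s3 ls s3://liv-elasticsearch/ --recursive | grep master.dat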

What is the output of GET _cat/nodes?v?

You can probably activate more tracing for this plugin, on the packages com.amazon and org.elasticsearch.repositories.s3.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.1/logging.html#configuring-logging-levels
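
For example, via the cluster settings API (a sketch; transient, so it resets on a full cluster restart):

PUT _cluster/settings
{
  "transient": {
    "logger.com.amazon": "TRACE",
    "logger.org.elasticsearch.repositories.s3": "TRACE"
  }
}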

I can't see a master.dat-temp in the bucket.
Just to check, I've used S3 Browser with the access keys for the bucket to make sure they still have permissions.
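
(Another way to confirm which identity the keys resolve to, if the AWS CLI is available with a profile using the same keys, would be:

aws sts get-caller-identity

but S3 Browser at least confirms the keys themselves are still valid.)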

GET _cat/nodes?v shows:

ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.20.3.103           15          99  31    0.55    0.37     0.35 mdi       *      L-AWSPELASTIC01
172.20.3.247            4          67   0    0.03    0.03     0.00 m         -      L-AWSPLOGSTASH01
172.20.3.57            64          99  35    0.57    0.48     0.42 mdi       -      L-AWSPELASTIC02

I've tried to enable trace logging for the two packages, but I can't see anything of note change in the logs.
This is from querying the s3 repository:

[2018-01-09T15:55:36,496][WARN ][r.suppressed             ] path: /_snapshot/s3/*, params: {repository=s3, snapshot=*}
org.elasticsearch.transport.RemoteTransportException: [L-AWSPELASTIC01][172.20.3.103:9300][cluster:admin/snapshot/get]
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 0FB738108B42ED00; S3 Extended Request ID: uRRojSQzvbWik1RWCsFjvSOZk7uyRKKLZUPCPWfmuQ2JNLNfItrthCiiQwqC7z3NNkO4GaN8XM8=)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639) ~[?:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304) ~[?:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056) ~[?:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) ~[?:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) ~[?:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) ~[?:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) ~[?:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) ~[?:?]
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) ~[?:?]
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247) ~[?:?]
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194) ~[?:?]
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4188) ~[?:?]
        at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:823) ~[?:?]
        at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:798) ~[?:?]
        at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$null$4(S3BlobContainer.java:139) ~[?:?]
        at org.elasticsearch.repositories.s3.SocketAccess.lambda$doPrivilegedVoid$0(SocketAccess.java:57) ~[?:?]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_151]
        at org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedVoid(SocketAccess.java:56) ~[?:?]
        at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$listBlobsByPrefix$5(S3BlobContainer.java:131) ~[?:?]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_151]
        at org.elasticsearch.repositories.s3.S3BlobContainer.listBlobsByPrefix(S3BlobContainer.java:128) ~[?:?]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:769) ~[elasticsearch-6.1.1.jar:6.1.1]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:747) ~[elasticsearch-6.1.1.jar:6.1.1]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:599) ~[elasticsearch-6.1.1.jar:6.1.1]
        at org.elasticsearch.snapshots.SnapshotsService.getRepositoryData(SnapshotsService.java:140) ~[elasticsearch-6.1.1.jar:6.1.1]
        at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:97) ~[elasticsearch-6.1.1.jar:6.1.1]
        at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:55) ~[elasticsearch-6.1.1.jar:6.1.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:88) ~[elasticsearch-6.1.1.jar:6.1.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.doRun(TransportMasterNodeAction.java:167) ~[elasticsearch-6.1.1.jar:6.1.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) ~[elasticsearch-6.1.1.jar:6.1.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.1.jar:6.1.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_151]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_151]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]

Updating the settings shows this:

[2018-01-09T15:55:42,405][DEBUG][o.e.r.s.S3Repository     ] [L-AWSPELASTIC01] using bucket [liv-elasticsearch], chunk_size [1gb], server_side_encryption [false], buffer_size [100mb], cannedACL [], storageClass []
[2018-01-09T15:55:42,405][DEBUG][o.e.r.s.InternalAwsS3Service] [L-AWSPELASTIC01] creating S3 client with client_name [default], endpoint []
[2018-01-09T15:55:42,405][DEBUG][o.e.r.s.InternalAwsS3Service] [L-AWSPELASTIC01] Using basic key/secret credentials
[2018-01-09T15:55:42,431][INFO ][o.e.r.RepositoriesService] [L-AWSPELASTIC01] update repository [s3]

Sorry. Package name for AWS should have been com.amazonaws.

Did you set the client credentials on all 3 nodes?

Yes, all 3 have the access and secret key set.
I'm just going through it again on the master to make sure. Should I read anything into the endpoint being blank in the trace I posted in message 11?

The AWS debug logging hasn't changed any log output when I try to modify the snapshot repository or view the snapshots.

should I read anything into the endpoint being blank in the trace I posted in message 11?

That's weird indeed.

How exactly do you create the key/secret settings? Can you paste the command you are using? Of course don't paste any password here.

I've tried like this:

echo AWS_ACCESS_KEY | sudo bin/elasticsearch-keystore add --stdin s3.client.default.access_key && \
echo AWS_SECRET_KEY | sudo bin/elasticsearch-keystore add --stdin s3.client.default.secret_key

I've also tried it as root (so no sudo required), and I've tried just running elasticsearch-keystore add s3.client.default.access_key and then typing the key in!
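
One thing I can't rule out is echo appending a trailing newline to the piped value; I don't know whether the keystore trims it, so as a further test I could feed the values in with printf, which writes the string without a newline:

printf '%s' AWS_ACCESS_KEY | sudo bin/elasticsearch-keystore add --stdin s3.client.default.access_key
printf '%s' AWS_SECRET_KEY | sudo bin/elasticsearch-keystore add --stdin s3.client.default.secret_key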

Hmmm. I think I understand.

The endpoint should be set as a setting in elasticsearch.yml according to S3 repository plugin | Elasticsearch Plugins and Integrations [8.11] | Elastic

The client used to connect to S3 has a number of settings available. Client setting names are of the form s3.client.CLIENT_NAME.SETTING_NAME and specified inside elasticsearch.yml.

So add this:

s3.client.default.endpoint: s3.eu-west-1.amazonaws.com

And restart all nodes.

And remove it from the s3 repository definition:

PUT /_snapshot/s3
{
  "type": "s3",
    "settings": {
      "bucket": "liv-elasticsearch",
      "compress": "true",
      "max_retries": 3
    }
}

There is no endpoint in repository settings: S3 repository plugin | Elasticsearch Plugins and Integrations [8.11] | Elastic
