I've just upgraded our cluster from 5.6 to 6.1. Everything is fine apart from accessing snapshot repositories in an S3 bucket.
I've migrated the access key and secret key settings to the keystore using the two options s3.client.default.access_key and s3.client.default.secret_key.
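For reference, the keystore migration was done along these lines (run from the Elasticsearch home directory; paths will differ for package installs):
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
bin/elasticsearch-keystore list
followed by a restart of each node so the secure settings are picked up.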
Elasticsearch starts and I don't get any deprecation warnings, but I can't access the buckets.
I can query the snapshot repository to get its settings:
{
"s3": {
"type": "s3",
"settings": {
"bucket": "red-elasticsearch",
"endpoint": "eu-west-1",
"max_retries": "3",
"compress": "true"
}
}
}
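That output comes from querying the repository definition itself, along the lines of:
get /_snapshot/s3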
If I try to list the snapshots in the repository I see:
get /_snapshot/s3/*
{
"error": {
"root_cause": [
{
"type": "amazon_s3_exception",
"reason": "amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 3C2DA71B68E48D55; S3 Extended Request ID: q5ljC2KE9pw7ImwkWSomvdm2u4sNYkg804NSTDrAi/zDhRfqWj7fnVlQiDbxIuzDhDYr4uivFRk=)"
}
],
"type": "amazon_s3_exception",
"reason": "amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 3C2DA71B68E48D55; S3 Extended Request ID: q5ljC2KE9pw7ImwkWSomvdm2u4sNYkg804NSTDrAi/zDhRfqWj7fnVlQiDbxIuzDhDYr4uivFRk=)"
},
"status": 500
}
and if I try to update the settings in the repository or create a new one I see the same kind of error (full output further down).
Noted on the markup
I was trying to change the region to an endpoint setting because I'd misread another post from an issue flagged on GitHub.
Yes I did, hence me saying "I've tried it with both region and endpoint".
The documentation also states that the endpoint will be worked out from the bucket location if it isn't specified, so I've tried removing the setting too.
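For example, re-registering the repository without the endpoint setting looks roughly like this (same bucket and options as above):
put /_snapshot/s3
{
  "type": "s3",
  "settings": {
    "bucket": "red-elasticsearch",
    "max_retries": "3",
    "compress": "true"
  }
}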
Any change to the snapshot repository, whether creating a new one or trying to update the existing one, results in an error.
If I try to get the snapshots from the repository:
get /_snapshot/s3/*
I receive this error:
{
"error": {
"root_cause": [
{
"type": "amazon_s3_exception",
"reason": "amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 4E0B6C46C4F3171D; S3 Extended Request ID: WrQ3HB63/PQWOblmGWvSKejATBLYIOSpfDNwo9sHaM8aIRSiVBtdkDk/lKF2pO3zcx09KjOMOsE=)"
}
],
"type": "amazon_s3_exception",
"reason": "amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 4E0B6C46C4F3171D; S3 Extended Request ID: WrQ3HB63/PQWOblmGWvSKejATBLYIOSpfDNwo9sHaM8aIRSiVBtdkDk/lKF2pO3zcx09KjOMOsE=)"
},
"status": 500
}
I've tried new access keys and I've tried the ones that used to work fine with the previous release; both give this error.
I've tried piping the access key in from a file, using echo, and typing it in manually. I can't see any way of checking what is actually being passed to Amazon.
The first error seems to indicate that the master node has not been able to write the master.dat-temp file, or that the other nodes are not able to read that file.
Can you manually check if it exists in the liv-elasticsearch bucket?
What is the output of GET _cat/nodes?v?
You can probably enable more trace logging for this plugin, for the packages com.amazonaws and org.elasticsearch.repositories.s3.
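Something along these lines should turn that on dynamically (com.amazonaws is the AWS SDK package; set the levels back to the default afterwards):
put /_cluster/settings
{
  "transient": {
    "logger.com.amazonaws": "TRACE",
    "logger.org.elasticsearch.repositories.s3": "TRACE"
  }
}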
I can't see a master.dat-temp in the bucket.
Just to check, I've used S3 Browser with the access keys for the bucket to make sure they still have permissions.
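For what it's worth, the same check from the command line would be something like the following, using the bucket name from the repository settings and assuming the AWS CLI is configured with the same keys:
aws s3 ls s3://red-elasticsearch --recursive | grep master.dat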
GET _cat/nodes?v shows:
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.20.3.103 15 99 31 0.55 0.37 0.35 mdi * L-AWSPELASTIC01
172.20.3.247 4 67 0 0.03 0.03 0.00 m - L-AWSPLOGSTASH01
172.20.3.57 64 99 35 0.57 0.48 0.42 mdi - L-AWSPELASTIC02
I've tried enabling trace logging for the two packages, but I can't see anything of note change in the logs.
This is from querying the s3 repository:
[2018-01-09T15:55:36,496][WARN ][r.suppressed ] path: /_snapshot/s3/*, params: {repository=s3, snapshot=*}
org.elasticsearch.transport.RemoteTransportException: [L-AWSPELASTIC01][172.20.3.103:9300][cluster:admin/snapshot/get]
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: amazon_s3_exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 0FB738108B42ED00; S3 Extended Request ID: uRRojSQzvbWik1RWCsFjvSOZk7uyRKKLZUPCPWfmuQ2JNLNfItrthCiiQwqC7z3NNkO4GaN8XM8=)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639) ~[?:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304) ~[?:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056) ~[?:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) ~[?:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) ~[?:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) ~[?:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) ~[?:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) ~[?:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) ~[?:?]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247) ~[?:?]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194) ~[?:?]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4188) ~[?:?]
at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:823) ~[?:?]
at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:798) ~[?:?]
at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$null$4(S3BlobContainer.java:139) ~[?:?]
at org.elasticsearch.repositories.s3.SocketAccess.lambda$doPrivilegedVoid$0(SocketAccess.java:57) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_151]
at org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedVoid(SocketAccess.java:56) ~[?:?]
at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$listBlobsByPrefix$5(S3BlobContainer.java:131) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_151]
at org.elasticsearch.repositories.s3.S3BlobContainer.listBlobsByPrefix(S3BlobContainer.java:128) ~[?:?]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:769) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:747) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:599) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.snapshots.SnapshotsService.getRepositoryData(SnapshotsService.java:140) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:97) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:55) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:88) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.doRun(TransportMasterNodeAction.java:167) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.1.jar:6.1.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Yes, all 3 have the access and secret key set.
I'm just going through it again on the master to make sure. Should I read anything into the endpoint being blank in the trace I posted in message 11?
The AWS debug logging hasn't changed any log output when trying to modify the snapshot repository or view the snapshots.
I've also tried it as root (so no sudo required), and I've tried just using elasticsearch-keystore add s3.client.default.access_key and then typing the key in!
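The piped variant was along the lines of the following, where THE_ACCESS_KEY is just a placeholder (--stdin makes the keystore tool read the value from standard input instead of prompting):
echo -n "THE_ACCESS_KEY" | bin/elasticsearch-keystore add --stdin s3.client.default.access_key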
The client used to connect to S3 has a number of settings available. Client setting names are of the form s3.client.CLIENT_NAME.SETTING_NAME and specified inside elasticsearch.yml.
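So a non-secure client setting such as the endpoint would go into elasticsearch.yml rather than into the repository settings, for example (the regional endpoint here is just an illustration):
s3.client.default.endpoint: s3.eu-west-1.amazonaws.com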