Hello,
My goal is to link my Graylog/Elasticsearch server to AWS S3 in order to send indices there.
After a lot of discussion on the Elasticsearch forums already, here is where I stand:
-On AWS, I created a bucket mysiem
-I also created a user svc-graylog who has full rights on this bucket
On my server:
-I added the line s3.client.default.region: eu-west-3 to my elasticsearch.yml file
-And I added the access key and the secret key corresponding to my svc-graylog user:
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
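To double-check this step, the keystore entries can be listed and then picked up by a running node; this is just a sketch using standard tooling (the S3 client credentials are reloadable secure settings):
bin/elasticsearch-keystore list
curl -X POST "localhost:9200/_nodes/reload_secure_settings?pretty"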
Indeed, thank you, I had not seen the "Copy as Curl" option.
Do I have to create a folder for snapshots (I may have forgotten a step)? Here is the command I run:
Thanks for helping me.
I created the folder /usr/share/elasticsearch/my_s3_repositoy on my server and yet I still get the same result:
curl -X PUT "localhost:9200/_snapshot/my_s3_repository?pretty" -H 'Content-Type: application/json' -d'
{
"type": "s3",
"settings": {
"bucket": "mysiem"
}
}
'
{
"error" : {
"root_cause" : [
{
"type" : "repository_verification_exception",
"reason" : "[my_s3_repository] path is not accessible on master node"
}
],
"type" : "repository_verification_exception",
"reason" : "[my_s3_repository] path is not accessible on master node",
"caused_by" : {
"type" : "i_o_exception",
"reason" : "Unable to upload object [tests-NfvoJoaLQBSaGUZybRKX4w/master.dat] using a single upload",
"caused_by" : {
"type" : "sdk_client_exception",
"reason" : "Failed to connect to service endpoint: ",
"caused_by" : {
"type" : "socket_timeout_exception",
"reason" : "Connect timed out"
}
}
}
},
"status" : 500
}
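The "Connect timed out" at the bottom of that trace means the node never managed to reach the S3 endpoint at all, which usually points to a firewall or proxy problem rather than permissions; a quick connectivity check from the Elasticsearch host (assuming the default regional endpoint for eu-west-3) would be something like:
curl -sI https://s3.eu-west-3.amazonaws.com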
I also tried to put the name of the directory in my bucket instead of "my_s3_repository", but I get the same result.
In addition, the permissions look fine to me, because the svc-graylog user has full rights on the bucket, and I have configured its access key and secret key on the server (as explained in the first post).
I would like to send my old indices to my S3 storage. I don't quite understand whether my_s3_repository corresponds to a local folder or a folder in the bucket, because both gave me the same error.
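For reference, my_s3_repository is only the name the repository is registered under in Elasticsearch; it is neither a local folder nor a folder in the bucket. The location inside the bucket is controlled by the optional base_path repository setting. A sketch of the registration call (the base_path value here is just a placeholder):
curl -X PUT "localhost:9200/_snapshot/my_s3_repository?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "mysiem",
    "base_path": "graylog-snapshots"
  }
}
'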
I did something a little bit different (so that’s probably where the problem comes from)
-I have created an S3 bucket in AWS named mysiem
-I have created a user in AWS named svc-graylog
-The permissions I set on the bucket are different from the recommended ones:
Here, the svc-graylog user has all permissions
The key/secret I entered on my server are those of the svc-graylog user
So based on what you told me:
-Is the svc-graylog user required?
-Do I have to change the permissions to the recommended ones (so no longer any link to my svc-graylog user)? (See the policy sketch after these questions.)
-If I no longer use the svc-graylog user, which key/secret should I use?
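For reference, the permissions recommended in the repository-s3 documentation boil down to an IAM policy along these lines (a sketch, with the mysiem bucket name from this thread substituted in; check the docs for the exact current policy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions"
      ],
      "Resource": ["arn:aws:s3:::mysiem"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": ["arn:aws:s3:::mysiem/*"]
    }
  ]
}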
I am facing the same error when trying to register S3-compatible storage as a repository. The API call returns "path is not accessible on master node".
Here are my environment details:
ELK version 7.12
S3 plugin version 7.12, installed on all Elasticsearch nodes.
Please note that the endpoint URL is accessible from all Elasticsearch nodes. I also verified the provided credentials manually using the S3 client code provided by the storage team, but the same credentials are not working with the Elasticsearch S3 plugin.
Here is the detailed error returned by the API call:
{
"error" : {
"root_cause" : [
{
"type" : "repository_verification_exception",
"reason" : "[s3_repository] path is not accessible on master node",
"stack_trace" : "RepositoryVerificationException[[s3_repository] path is not accessible on master node]; nested: IOException[Unable to upload object [tests-7f-YyDJZR_qwQLLUMvDyzQ/master.dat] using a single upload]; nested: NotSerializableExceptionWrapper[amazon_s3_exception: Not Allowed (Service: Amazon S3; Status Code: 405; Error Code: 405 Not Allowed; Request ID: null; S3 Extended Request ID: null)];\nCaused by: java.io.IOException: Unable to upload object [tests-7f-YyDJZR_qwQLLUMvDyzQ/master.dat] using a single upload\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.executeSingleUpload(S3BlobContainer.java:349)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$1(S3BlobContainer.java:122)\n\tat java.security.AccessController.doPrivileged(AccessController.java:554)\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:37)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:120)\n\tat org.elasticsearch.common.blobstore.BlobContainer.writeBlob(BlobContainer.java:116)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlobAtomic(S3BlobContainer.java:137)\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1293)\n\tat org.elasticsearch.repositories.RepositoriesService$4.doRun(RepositoriesService.java:349)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)\n\tat java.lang.Thread.run(Thread.java:832)\nCaused by: NotSerializableExceptionWrapper[amazon_s3_exception: Not Allowed (Service: Amazon S3; Status Code: 405; Error Code: 405 Not Allowed; Request ID: null; S3 Extended Request ID: null)]\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1799)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1383)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1359)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:796)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5054)\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5000)\n\tat com.amazonaws.services.s3.AmazonS3Client.access$300(AmazonS3Client.java:394)\n\tat com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy.invokeServiceCall(AmazonS3Client.java:5942)\n\tat com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1808)\n\tat 
com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1768)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$executeSingleUpload$18(S3BlobContainer.java:346)\n\tat org.elasticsearch.repositories.s3.SocketAccess.lambda$doPrivilegedVoid$0(SocketAccess.java:46)\n\tat java.security.AccessController.doPrivileged(AccessController.java:312)\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedVoid(SocketAccess.java:45)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.executeSingleUpload(S3BlobContainer.java:345)\n\t... 13 more\n"
}
There are a number of storage systems that provide an S3-compatible API, and the repository-s3 plugin allows you to use these systems in place of AWS S3. To do so, you should set the s3.client.CLIENT_NAME.endpoint setting to the system's endpoint. This setting accepts IP addresses and hostnames and may include a port. For example, the endpoint may be 172.17.0.2 or 172.17.0.2:9000. You may also need to set s3.client.CLIENT_NAME.protocol to http if the endpoint does not support HTTPS.
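As a sketch, with 172.17.0.2:9000 standing in for your storage system's address, those two settings go in elasticsearch.yml like this:
s3.client.default.endpoint: "172.17.0.2:9000"
s3.client.default.protocol: "http"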
Minio is an example of a storage system that provides an S3-compatible API. The repository-s3 plugin allows Elasticsearch to work with Minio-backed repositories as well as repositories stored on AWS S3. Other S3-compatible storage systems may also work with Elasticsearch, but these are not covered by the Elasticsearch test suite.
Note that some storage systems claim to be S3-compatible without correctly supporting the full S3 API. The repository-s3 plugin requires full compatibility with S3. In particular it must support the same set of API endpoints, return the same errors in case of failures, and offer a consistency model no weaker than S3’s when accessed concurrently by multiple nodes. Incompatible error codes and consistency models may be particularly hard to track down since errors and consistency failures are usually rare and hard to reproduce.
You can perform some basic checks of the suitability of your storage system using the repository analysis API. If this API does not complete successfully, or indicates poor performance, then your storage system is not fully compatible with AWS S3 and therefore unsuitable for use as a snapshot repository. You will need to work with the supplier of your storage system to address any incompatibilities you encounter.
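A minimal run of that API against a repository registered as s3_repository (the repository name and sizes here are placeholders) might look like:
curl -X POST "localhost:9200/_snapshot/s3_repository/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s&pretty"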
@dadoonet, thank you for your help, I just fixed the problem.
I filled in the access/secret key of my AWS user, and I added the permissions from your doc with a small addition for the svc-graylog user (otherwise it did not work).