Link Elasticsearch to S3

Hello,
My goal is to link my Graylog/Elasticsearch server to AWS S3 so I can send indices there.
After having already exchanged a lot on the Elasticsearch forums, here is where I am:

  • On AWS, I created a bucket named mysiem
  • I also created a user svc-graylog who has all the rights on this bucket

On my server:

  • I added the line s3.client.default.region: eu-west-3 to my elasticsearch.yml file
  • And I added the access key and the secret key corresponding to my svc-graylog user:
    bin/elasticsearch-keystore add s3.client.default.access_key
    bin/elasticsearch-keystore add s3.client.default.secret_key
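
For reference, the relevant line now in my elasticsearch.yml looks like this (the keys themselves are only in the keystore):

s3.client.default.region: eu-west-3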

I would now like to create a repository using:

curl -PUT _snapshot/mysiem_repo
{
  "type": "s3",
  "settings": {
    "bucket": "mysiem"
  }
}

Except when I run the command curl -PUT _snapshot/mysiem_repo it returns:
curl: (6) Could not resolve host: _snapshot

Does anyone know how to help me? Thank you in advance

You need to add the hostname and the port like:

curl -XPUT http://localhost:9200/_snapshot/mysiem_repo -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "mysiem"
  }
}'

or run it from the Kibana dev console:

PUT _snapshot/mysiem_repo
{
  "type": "s3",
  "settings": {
    "bucket": "mysiem"
  }
}

or use the snapshot management UI in Kibana.
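
Once the repository is registered, you can check it, for example with:

curl -XGET http://localhost:9200/_snapshot/mysiem_repo?pretty

or trigger an explicit verification (adjust host/port to your setup):

curl -XPOST http://localhost:9200/_snapshot/mysiem_repo/_verify?pretty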

Indeed, thank you, I had not seen the "Copy as cURL" option.
Do I have to create a folder for the snapshots (I may have forgotten a step)? Here is the command I enter:

curl -X PUT "localhost:9200/_snapshot/my_s3_repository?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "mysiem"
  }
}
'

And here is the response:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[my_s3_repository] path  is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[my_s3_repository] path  is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-orzT2wNNR3WC4iya68C-yg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "Failed to connect to service endpoint: ",
        "caused_by" : {
          "type" : "socket_timeout_exception",
          "reason" : "Connect timed out"
        }
      }
    }
  },
  "status" : 500
}

Please format your code, logs or configuration files using the </> icon, as explained in this guide, and not the citation button.

Yes.

And also read S3 repository plugin | Elasticsearch Plugins and Integrations [8.11] | Elastic

Thanks for helping me.
I created the folder /usr/share/elasticsearch/my_s3_repositoy on my server and yet I still have the same result:

curl -X PUT "localhost:9200/_snapshot/my_s3_repository?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "mysiem"
  }
}
'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[my_s3_repository] path  is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[my_s3_repository] path  is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-NfvoJoaLQBSaGUZybRKX4w/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "Failed to connect to service endpoint: ",
        "caused_by" : {
          "type" : "socket_timeout_exception",
          "reason" : "Connect timed out"
        }
      }
    }
  },
  "status" : 500
}

I also tried using the name of the directory in my bucket instead of "my_s3_repository", but I get the same result.

In addition, the permissions look fine to me because the svc-graylog user has all the rights on the bucket, and I have configured its access key and secret key correctly on the server (as explained in the first post).

An S3 repository is not meant for local folders.

What are you trying to do here?

I would like to send my old indices to my S3 storage. I don't quite understand whether my_s3_repository corresponds to a local folder or to a folder in the bucket, because both gave me the same error.

my_s3_repository is just the name that you are giving to the repository.

Underneath, it can be an S3 repo, a GDrive repo, a shared folder repo...
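
If what you want is for the snapshot files to end up under a specific prefix ("folder") inside the bucket, there is a base_path setting you can add when registering the repository; something like this (graylog/snapshots here is just an example path):

PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "mysiem",
    "base_path": "graylog/snapshots"
  }
}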

Ok.

  • So you created an S3 bucket in AWS named mysiem?
  • And then you also gave that bucket the recommended S3 permissions?
  • And you have at hand the right key/secret from the AWS console?

If so, you should normally be good to go. You need to store the key/secret in the secure settings as explained here, so basically:

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key

And then you can run:

PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "mysiem"
  }
}
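
Note that if you added the keystore entries while the nodes were already running, the secure settings have to be reloaded (or the nodes restarted) before they are picked up; for example:

POST _nodes/reload_secure_settings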

I did something a little bit different (so that's probably where the problem comes from):

  • I created in AWS an S3 bucket named mysiem
  • I created in AWS a user named svc-graylog
  • The permissions I put on the bucket are different from those that are recommended:

{
    "Version": "2012-10-17",
    "Id": "Policy1618326484218",
    "Statement": [
        {
            "Sid": "Stmt1618326455115",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::889564284383:user/svc-graylog"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mysiem"
        }
    ]
}

Here the svc-graylog user has all permissions.
The key/secret that I entered on my server are those of the svc-graylog user.

So based on what you told me:

  • Is svc-graylog required?
  • Do I have to change the permissions to those that are recommended (so no longer any link to my svc-graylog user)?
  • If I no longer have the svc-graylog user, what is the right key/secret?

Thank you

Hi,

I am facing the same error when trying to register S3-compatible storage as a repository. The API call returns "path is not accessible on master node".
Here are my environment details:
ELK version 7.12
S3 plugin version 7.12, installed on all Elasticsearch nodes.

What I got from my storage team is below:

endpoint url : cloudstorage.xxxx.xxxx
bucket name: identityverifylogs
access key
secret key

I added the access key and secret key on all Elasticsearch nodes using the following commands:

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key

The following is the API call that I use to register the S3 repository:

PUT _snapshot/s3_repository?error_trace
{
  "type": "s3",
  "settings": {
    "bucket": "identityverifylogs",
    "client": "default",
    "endpoint": "cloudstorage.xxxx.xxxx"
  }
}

Please note that the endpoint URL is accessible from all Elasticsearch nodes. I also verified the provided credentials manually using S3 client code provided by the storage team, but the same credentials are not working with the Elasticsearch S3 plugin.

The following is the detailed error returned by the API call:

   {
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[s3_repository] path  is not accessible on master node",
        "stack_trace" : "RepositoryVerificationException[[s3_repository] path  is not accessible on master node]; nested: IOException[Unable to upload object [tests-7f-YyDJZR_qwQLLUMvDyzQ/master.dat] using a single upload]; nested: NotSerializableExceptionWrapper[amazon_s3_exception: Not Allowed (Service: Amazon S3; Status Code: 405; Error Code: 405 Not Allowed; Request ID: null; S3 Extended Request ID: null)];\nCaused by: java.io.IOException: Unable to upload object [tests-7f-YyDJZR_qwQLLUMvDyzQ/master.dat] using a single upload\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.executeSingleUpload(S3BlobContainer.java:349)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$1(S3BlobContainer.java:122)\n\tat java.security.AccessController.doPrivileged(AccessController.java:554)\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:37)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:120)\n\tat org.elasticsearch.common.blobstore.BlobContainer.writeBlob(BlobContainer.java:116)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlobAtomic(S3BlobContainer.java:137)\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1293)\n\tat org.elasticsearch.repositories.RepositoriesService$4.doRun(RepositoriesService.java:349)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)\n\tat java.lang.Thread.run(Thread.java:832)\nCaused by: NotSerializableExceptionWrapper[amazon_s3_exception: Not Allowed (Service: Amazon S3; Status Code: 405; Error Code: 405 Not Allowed; Request ID: null; S3 Extended Request ID: null)]\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1799)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1383)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1359)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:796)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5054)\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5000)\n\tat com.amazonaws.services.s3.AmazonS3Client.access$300(AmazonS3Client.java:394)\n\tat com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy.invokeServiceCall(AmazonS3Client.java:5942)\n\tat com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1808)\n\tat 
com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1768)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$executeSingleUpload$18(S3BlobContainer.java:346)\n\tat org.elasticsearch.repositories.s3.SocketAccess.lambda$doPrivilegedVoid$0(SocketAccess.java:46)\n\tat java.security.AccessController.doPrivileged(AccessController.java:312)\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedVoid(SocketAccess.java:45)\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.executeSingleUpload(S3BlobContainer.java:345)\n\t... 13 more\n"
      }

@qiratnahraf It would be better if you opened your own thread for this and then added a link to this discussion if needed.

For now, I'd just like to highlight this part of the documentation: S3 repository plugin | Elasticsearch Plugins and Integrations [8.11] | Elastic

There are a number of storage systems that provide an S3-compatible API, and the repository-s3 plugin allows you to use these systems in place of AWS S3. To do so, you should set the s3.client.CLIENT_NAME.endpoint setting to the system's endpoint. This setting accepts IP addresses and hostnames and may include a port. For example, the endpoint may be 172.17.0.2 or 172.17.0.2:9000. You may also need to set s3.client.CLIENT_NAME.protocol to http if the endpoint does not support HTTPS.

Minio is an example of a storage system that provides an S3-compatible API. The repository-s3 plugin allows Elasticsearch to work with Minio-backed repositories as well as repositories stored on AWS S3. Other S3-compatible storage systems may also work with Elasticsearch, but these are not covered by the Elasticsearch test suite.

Note that some storage systems claim to be S3-compatible without correctly supporting the full S3 API. The repository-s3 plugin requires full compatibility with S3. In particular it must support the same set of API endpoints, return the same errors in case of failures, and offer a consistency model no weaker than S3’s when accessed concurrently by multiple nodes. Incompatible error codes and consistency models may be particularly hard to track down since errors and consistency failures are usually rare and hard to reproduce.

You can perform some basic checks of the suitability of your storage system using the repository analysis API. If this API does not complete successfully, or indicates poor performance, then your storage system is not fully compatible with AWS S3 and therefore unsuitable for use as a snapshot repository. You will need to work with the supplier of your storage system to address any incompatibilities you encounter.
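
As an untested sketch only (using the endpoint you posted and assuming you are using the default client), the client settings in elasticsearch.yml could look something like:

s3.client.default.endpoint: cloudstorage.xxxx.xxxx
s3.client.default.protocol: https
s3.client.default.path_style_access: true

Whether you need protocol: http and/or path-style access depends on your storage system; your storage supplier should be able to confirm the exact values.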

Hope this helps.

@dadoonet, thank you for your help, I just fixed the problem.

I filled in the access/secret key of my AWS user, and I added the permissions from your doc with a small addition for the svc-graylog user (otherwise it did not work).

These are the two "Principal" sections below:

{
"Version": "2012-10-17",
"Statement": [
    {
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::889564284383:user/svc-graylog"
        },
        "Action": [
            "s3:ListBucket",
            "s3:GetBucketLocation",
            "s3:ListBucketMultipartUploads",
            "s3:ListBucketVersions"
        ],
        "Resource": "arn:aws:s3:::mysiem"
    },
    {
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::889564284383:user/svc-graylog"
        },
        "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:DeleteObject",
            "s3:AbortMultipartUpload",
            "s3:ListMultipartUploadParts"
        ],
        "Resource": "arn:aws:s3:::mysiem/*"
    }
]
}
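
As a side note, a quick way to sanity-check bucket access from the server (assuming the AWS CLI is installed and configured with the svc-graylog keys) is something like:

aws s3 ls s3://mysiem

If that works but the repository still fails to verify, the problem is more likely on the Elasticsearch side than in the bucket policy.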

Anyway, thank you very much for your time and help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.