Assume role for S3 snapshot repository

Hi everybody,

How is it going?

Let's see if someone can help me with this. Sorry if the question is trivial (I am new to Elasticsearch).

I have a question regarding the use of the S3 repository plugin to automate the snapshot process of our Elasticsearch cluster. Specifically, we are using plugin version 7.17.6 along with a self-managed Elasticsearch 7.17.6 running on-premises.

We are trying to register an S3 repository:

PUT _snapshot/s3_repository_example
{
    "type" : "s3",
    "settings" : {
      "client": "default",
      "bucket" : "bucket-name",
      "base_path": "snapshot",
      "storage_class" : "standard_ia",
      "endpoint": "https://s3.eu-south-2.amazonaws.com",
      "proxy.host": "proxy.example.local",
      "proxy.port": "4444",
      "region": "eu-south-2"
    }
}

and we are getting the following error:

 "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [snapshot/tests-aMqZ1xIiST2J32WPngfCyg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "amazon_s3_exception",
        "reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: XXXXXXXXXXXX; S3 Extended Request ID: XXXXXXXXXXXXX)"
      }

Note that we have added the access_key and the secret_key to the keystore on all nodes and restarted them all, but the problem persists.
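For reference, these are the secure settings matching our "default" client; we added them on each node with the keystore tool (each command prompts for the value):

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key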

I suspect that this behaviour is caused by the fact that the credentials we are using are linked to an IAM role: to be granted access to the bucket, we need to assume that role.

Is there a way to assume an IAM role, the same way it is done in plugins like the Logstash S3 input plugin? Or is there any other way to do that? For example:

s3 {
    id => "example-id"
    access_key_id => "access-key"
    secret_access_key => "secret-key"
    role_session_name => "role-name"
    role_arn => "arn:aws:iam::XXXXXXXXXXXX:role/role-name"
    region => "region"
    bucket => "bucket-name"
    interval => 300
    additional_settings => {
        force_path_style => true
        follow_redirects => false
    }
}

I'd appreciate your help, because we are stuck on this issue.

Thanks in advance

Not directly within the S3 plugin. You would need to call the AssumeRole API yourself, extract the AccessKeyId, SecretAccessKey and SessionToken values from the response, insert them into the Elasticsearch keystore, and then call the reload secure settings API.
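To sketch what one rotation cycle could look like in Python (assuming boto3 and requests are installed, the client is named "default", Elasticsearch answers on localhost:9200 without TLS or auth, and ES_HOME/ROLE_ARN are placeholders you must fill in; the keystore step must run on every node, while the reload call runs once for the whole cluster):

import subprocess

import boto3
import requests

ES_HOME = "/usr/share/elasticsearch"  # placeholder: your install path
ROLE_ARN = "arn:aws:iam::XXXXXXXXXXXX:role/role-name"  # placeholder

# 1. Assume the role and extract the temporary credentials.
sts = boto3.client("sts")
creds = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="es-snapshots")["Credentials"]

# 2. Insert them into the Elasticsearch keystore (repeat on every node).
settings = {
    "s3.client.default.access_key": creds["AccessKeyId"],
    "s3.client.default.secret_key": creds["SecretAccessKey"],
    "s3.client.default.session_token": creds["SessionToken"],
}
for name, value in settings.items():
    subprocess.run(
        [f"{ES_HOME}/bin/elasticsearch-keystore", "add", "--stdin", "--force", name],
        input=value.encode(),
        check=True,
    )

# 3. Ask all nodes to re-read their keystores.
requests.post("http://localhost:9200/_nodes/reload_secure_settings").raise_for_status()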

Many thanks for the quick response.

I will check it

Ok,

It works like a charm. The problem now is that the connection, obviously, dies when the session token expires.

Is there any way to refresh the keystore contents using the API, or some other method, in order to automate the token refresh process centrally?

Thanks again!

Yes, you'll need to look at the Expiration field in the response to the AssumeRole API and repeat the process some time before the credentials expire. And repeat for the next key, and so on...

I believe the credentials normally expire after 12h (if obtained with long-term credentials), so an hourly rotation would be more than adequate.
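For example, a minimal scheduling loop around a hypothetical rotate_credentials() wrapper for the sketch above (returning the Credentials dict from the AssumeRole response; boto3 gives Expiration back as a timezone-aware datetime):

import time
from datetime import datetime, timedelta, timezone

while True:
    creds = rotate_credentials()  # hypothetical wrapper around the sketch above
    # Refresh comfortably before expiry; here, 15 minutes early.
    refresh_at = creds["Expiration"] - timedelta(minutes=15)
    time.sleep(max(0.0, (refresh_at - datetime.now(timezone.utc)).total_seconds()))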

That's correct, David.

But actually I was referring to the Elasticsearch REST API: I was wondering if there is a way to refresh the Elasticsearch keystore remotely, without needing to perform the task manually on each node.

Thanks!!

I see. No, that's not possible: for security reasons, Elasticsearch is not permitted to write to its own keystore. You need to do this with some external process that has sufficient privileges.
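As a rough sketch, such an external process could push the settings over SSH from one central host (NODES, ES_HOME and push_setting here are illustrative placeholders, not anything built in):

import subprocess

ES_HOME = "/usr/share/elasticsearch"  # placeholder: install path on the nodes
NODES = ["es-node-1.example.local", "es-node-2.example.local"]  # placeholders

def push_setting(host: str, name: str, value: str) -> None:
    # Pipe the secret over SSH into elasticsearch-keystore on the remote node.
    subprocess.run(
        ["ssh", host, f"{ES_HOME}/bin/elasticsearch-keystore",
         "add", "--stdin", "--force", name],
        input=value.encode(),
        check=True,
    )

After pushing all three s3.client.default.* settings to every node, a single call to the reload secure settings API picks them up cluster-wide.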

Ok,

I see, we will find a way to do that.

Thanks!