Failed to create S3 repository with ES 6.0

I used the S3 repository with ES 5 without any issue, using:

curl -s -XPUT "localhost:9200/_snapshot/REPO_NAME" -d "{
              \"type\": \"s3\",
              \"settings\": {
                \"bucket\": \"BUCKET_NAME\",
                \"region\": \"us-west-2\",
                \"base_path\": \"elasticsearch/\",
                \"access_key\": \"$(cat /aws/access_key)\",
                \"secret_key\": \"$(cat /aws/secret_key)\",
                \"compress\": true,
                \"server_side_encryption\": true
              }
            }"

In ES 6 you now need to put access_key and secret_key inside the keystore, so now I do

cat /aws/access_key | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key
cat /aws/secret_key | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key

And after that

curl -s -XPUT -H "Content-Type: application/json" "localhost:9200/_snapshot/REPO_NAME" -d "{
                  \"type\": \"s3\",
                  \"settings\": {
                    \"bucket\": \"BUCKET_NAME\",
                    \"base_path\": \"elasticsearch6/\",
                    \"compress\": true,
                    \"server_side_encryption\": true
                  }
                }"

But that command fails with

{"error":{"root_cause":[{"type":"repository_exception","reason":"[s3] failed to create repository"}],"type":"repository_exception","reason":"[s3] failed to create repository","caused_by":{"type":"sdk_client_exception","reason":"Unable to load credentials from service endpoint","caused_by":{"type":"socket_timeout_exception","reason":"connect timed out"}}},"status":500}

Not sure if I am missing something or this is a bug.

Are there more logs, like a stacktrace?

Could you change the log level?

@dadoonet yep, I have them. I wanted to paste them with the original question, but there is a limit on the message size.

This is a link to a gist with what I see in the logs.

Apparently this is caused by:

Caused by: connect timed out

Are you running this curl call from an EC2 instance or locally from your laptop?

@dadoonet it is a server at home. BTW, I also enabled DEBUG logging and found:

[2017-11-19T21:12:23,435][DEBUG][o.e.r.s.InternalAwsS3Service] [elasticsearch] creating S3 client with client_name [default], endpoint []
[2017-11-19T21:12:23,435][DEBUG][o.e.r.s.InternalAwsS3Service] [elasticsearch] Using instance profile credentials
[2017-11-19T21:12:23,439][DEBUG][c.a.s.s.AmazonS3Client   ] Bucket region cache doesn't have an entry for Trying to get bucket region from Amazon S3.

Which means my credentials aren't used.


# elasticsearch-keystore list  

I assume maybe there is a bug somewhere in the documentation? Do I need to do anything else to make these keystore entries visible to the S3 plugin?

@dadoonet Looking at the code, it does not seem like this code actually loads these values from the keystore. It does warn you about the deprecation, but keeps using SecureSetting.insecureString, whereas, as I understand it, it should use SecureSetting.secureString to have access to the keystore, right?

And I have another guess about what could cause it (in case I am wrong about this code not reading from the keystore).

With Docker (in my case Kubernetes) I keep only the data folder as stateful. Also, I add the keys to the keystore after ES has already started. So there are two issues:

  1. Looking at - it seems like these settings should be set before you start ES.
  2. In Docker environments the keystore is stored next to the elasticsearch.yml file, which means that I do not keep it between restarts.
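One way I could imagine working around issue 2 in Kubernetes is to build the keystore once and mount it from a Secret into the config directory, so it survives pod restarts. A hypothetical pod spec fragment (the names es-keystore and elasticsearch, and the mount path, are placeholders I made up, not verified anywhere):

```yaml
# Hypothetical sketch: mount a pre-built elasticsearch.keystore from a
# Kubernetes Secret into the config directory.
containers:
  - name: elasticsearch
    volumeMounts:
      - name: es-keystore
        mountPath: /usr/share/elasticsearch/config/elasticsearch.keystore
        subPath: elasticsearch.keystore
volumes:
  - name: es-keystore
    secret:
      secretName: es-keystore
```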

As a workaround I assume I can use es.allow_insecure_settings and keep the old way, but it does not seem to be a documented flag, and I am not sure where to set it.
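If that flag behaves like the other es.* flags, it is presumably a JVM system property; something like this might work (a guess, not verified against the docs):

```shell
# Assuming es.allow_insecure_settings is a JVM system property:
ES_JAVA_OPTS="-Des.allow_insecure_settings=true" bin/elasticsearch

# or the equivalent line added to config/jvm.options:
# -Des.allow_insecure_settings=true
```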

Secured Settings are read from this class

Could you try without Docker?

Yep, so I can confirm that because these settings aren't dynamic, you cannot modify them after you have already started Elasticsearch in Docker. Considering that the config folder is not a state folder, that makes it very problematic.

As a workaround I am pre-building the keystore in my image:


RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch repository-s3

RUN mkdir -p /aws

COPY access_key /aws/access_key
COPY secret_key /aws/secret_key

RUN cat /aws/access_key | base64 --decode | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key && \
    cat /aws/secret_key | base64 --decode | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key

RUN rm -fR /aws
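Note that the COPY'd key files are expected to be base64-encoded, since the RUN step decodes them before feeding them to the keystore. A quick sanity check of that round trip, with a made-up key value:

```shell
# The RUN step above pipes the copied files through `base64 --decode`,
# so access_key/secret_key must be committed in base64-encoded form.
printf 'AKIAEXAMPLEKEY' | base64 > /tmp/access_key
decoded=$(base64 --decode < /tmp/access_key)
echo "$decoded"   # prints AKIAEXAMPLEKEY
```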

But there are a lot of issues with that:

  1. Why does my Elasticsearch Docker image now have a keystore prebuilt into it? Is that a security bug? I do not know how it is implemented, but I assume that all users of this image can basically just steal the keystore and use it in their own image?

  2. Now I am required to pass the AWS credentials at build time. And as I recall, there is no secure way to do that with Docker. Before 6 I used the Kubernetes secret store to manage my secrets, which is a little bit more secure. Plus, I didn't need to store them in the keystore inside the image.

@dadoonet Could you please pass this feedback to the people who maintain the Docker image? I can provide additional feedback if that is required. And thank you for helping.


Cc @rjernst

I think that in the future we will be able to update secured settings while the node is running but I’m not sure about the plan for this.

Thanks for sharing the workaround, BTW. It might be worth writing it up in the secure settings documentation?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.