{"error":{"root_cause":[{"type":"repository_exception","reason":"[s3] failed to create repository"}],"type":"repository_exception","reason":"[s3] failed to create repository","caused_by":{"type":"sdk_client_exception","reason":"Unable to load credentials from service endpoint","caused_by":{"type":"socket_timeout_exception","reason":"connect timed out"}}},"status":500}
Not sure if I am missing something or if this is a bug.
@dadoonet it is a server at home. BTW, I also enabled DEBUG logging and found:
[2017-11-19T21:12:23,435][DEBUG][o.e.r.s.InternalAwsS3Service] [elasticsearch] creating S3 client with client_name [default], endpoint []
[2017-11-19T21:12:23,435][DEBUG][o.e.r.s.InternalAwsS3Service] [elasticsearch] Using instance profile credentials
[2017-11-19T21:12:23,439][DEBUG][c.a.s.s.AmazonS3Client ] Bucket region cache doesn't have an entry for outcold.net-backup. Trying to get bucket region from Amazon S3.
Which means my keystore credentials are not being used.
But
# elasticsearch-keystore list
keystore.seed
s3.client.default.access_key
s3.client.default.secret_key
I assume there is maybe a bug somewhere in the documentation? Do I need to do anything else to make these keystore entries visible to the S3 plugin?
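My assumption (I have not traced the plugin code, so this is only a guess) is that the keystore is read once at node startup, so anything added afterwards stays invisible until a restart. Something like this, done while the node is stopped, is what I would expect to be needed (AWS_ACCESS_KEY / AWS_SECRET_KEY are just placeholder variables):

# run while elasticsearch is stopped; create the keystore if it does not exist yet
bin/elasticsearch-keystore create
# placeholder variables standing in for the real credentials
echo -n "$AWS_ACCESS_KEY" | bin/elasticsearch-keystore add --stdin s3.client.default.access_key
echo -n "$AWS_SECRET_KEY" | bin/elasticsearch-keystore add --stdin s3.client.default.secret_key
# start elasticsearch again so the repository-s3 plugin picks the entries up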
And I have another guess about what can cause it (in case I am wrong that this code does not read from the keystore).
With Docker (in my case Kubernetes) I keep only the data folder as stateful. Also, I add the keys into the keystore after ES is already started. So there are two issues:
Because of the Docker environment, the keystore is stored next to the elasticsearch.yml file, which means I do not keep it between restarts.
As a workaround I assume I can use es.allow_insecure_settings and go back to the old way, but it does not seem to be a documented flag, and I am not sure where I can set it.
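If that flag behaves like other es.* options, my guess (purely an assumption, since it is undocumented) is that it would be passed as a JVM system property, with the credentials going back into elasticsearch.yml the old way:

# assumption: pass the flag as a system property via ES_JAVA_OPTS
export ES_JAVA_OPTS="-Des.allow_insecure_settings=true"
# and put the old-style (plain text) client settings back into elasticsearch.yml:
#   s3.client.default.access_key: <access key>
#   s3.client.default.secret_key: <secret key>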
Yep, so I can confirm that because these settings aren't dynamic, you cannot modify them after you have already started Elasticsearch in Docker. Considering that the config folder is not a state folder, that makes it very problematic.
As a workaround I am pre-building the keystore into my image:
FROM docker.elastic.co/elasticsearch/elasticsearch-basic:6.0.0
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch repository-s3
RUN mkdir -p /aws
COPY access_key /aws/access_key
COPY secret_key /aws/secret_key
RUN \
cat /aws/access_key | base64 --decode | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key && \
cat /aws/secret_key | base64 --decode | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key
RUN rm -fR /aws
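and I build it with the base64-encoded key files sitting next to the Dockerfile, roughly like this (the image tag is just an example):

# encode the raw credentials so the COPY'd files match what the RUN step decodes
echo -n "$AWS_ACCESS_KEY" | base64 > access_key
echo -n "$AWS_SECRET_KEY" | base64 > secret_key
docker build -t my-elasticsearch-s3:6.0.0 .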
But there are a lot of issues with that:
Why does the Elasticsearch Docker image actually have a keystore pre-built into the image? Is that a security bug? I do not know how it is implemented, but I assume that anyone with access to this image can basically just steal the keystore and use it in their own image?
Now I am required to pass the AWS credentials at the build step, and as I recall there is no secure way to do that with Docker. Before 6.0 I used the Kubernetes secret store to manage my secrets, which is a little bit more secure. Plus, I didn't need to store them in a keystore baked into the image.
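What I would much rather do (only a sketch, I have not fully tested it; the /mnt/aws path and the entrypoint path are my assumptions) is keep the credentials out of the image completely and build the keystore at container start from a Kubernetes secret mounted as files, with a small wrapper entrypoint:

#!/usr/bin/env bash
# hypothetical wrapper entrypoint; /mnt/aws is a Kubernetes secret mounted as files
set -e
cd /usr/share/elasticsearch
# create the keystore on first start only
if [ ! -f config/elasticsearch.keystore ]; then
  bin/elasticsearch-keystore create
fi
# load the credentials from the mounted secret before the node starts
bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key < /mnt/aws/access_key
bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key < /mnt/aws/secret_key
# hand off to the image's normal entrypoint (path may differ per image)
exec /usr/local/bin/docker-entrypoint.sh "$@"

That way the secrets stay in Kubernetes and only ever end up in the keystore inside the running container.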
@dadoonet Could you please pass this feedback to the people who maintain the Docker image? I can provide additional feedback if required. And thank you for helping.