Yep, I can confirm that because these settings aren't dynamic, you cannot modify them after Elasticsearch has already started in Docker. Considering that the config folder is not a state folder, this is quite problematic.
As a workaround, I am pre-building the keystore into my image:
FROM docker.elastic.co/elasticsearch/elasticsearch-basic:6.0.0
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch repository-s3
RUN mkdir -p /aws
# access_key and secret_key are local files containing the base64-encoded AWS credentials
COPY access_key /aws/access_key
COPY secret_key /aws/secret_key
# Decode the credentials and add them to the keystore as secure settings
RUN \
cat /aws/access_key | base64 --decode | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key && \
cat /aws/secret_key | base64 --decode | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key
RUN rm -fR /aws
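For completeness, this is roughly how I build it. The credential values and the image tag below are only placeholders for illustration:

```sh
# Base64-encode the raw credentials into the files the Dockerfile expects.
# Note they are still recoverable from the image layers, which is part of the problem.
echo -n 'AKIA...placeholder...' | base64 > access_key
echo -n 'placeholder-secret'    | base64 > secret_key

# Build the image; the populated keystore ends up baked into the result.
docker build -t my-elasticsearch-s3:6.0.0 .
```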
But there are a lot of issues with that:
- Why does the elasticsearch docker image actually ship with a keystore pre-built into the image? Is that a security bug? I do not know how it is implemented, but presumably any user of this image can simply steal the keystore and use it in their own image?
- Now I am required to pass the AWS credentials at the build step, and as far as I recall there is no secure way to do that with Docker. Before 6.x I used the Kubernetes secret store to manage my secrets (roughly as sketched below), which is a little more secure; plus I didn't need to store them in a keystore baked into the image.
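For context, this is a minimal sketch of what the pre-6.x approach looked like on my side. The secret and key names are placeholders, not anything the Elasticsearch image itself expects:

```sh
# Hypothetical resource and key names, for illustration only.
# The secret is created once in the cluster, outside of any image build:
kubectl create secret generic aws-s3-credentials \
  --from-literal=access_key='AKIA...placeholder...' \
  --from-literal=secret_key='placeholder-secret'

# The Elasticsearch pod spec can then reference this secret as env vars or a
# mounted volume, so the credentials never have to be baked into the image.
```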
@dadoonet Could you please pass this feedback to the people who maintain the Docker image? I can provide additional details if required. And thank you for your help.