Setting up snapshots with MinIO, unable to retrieve secrets

Hi,

I'm trying to set up snapshots with a local MinIO instance, but I can't seem to import the secrets as documented here.

I set up the secret with:

kubectl create secret generic minio-credentials --from-file=s3.client.default.access_key --from-file=s3.client.default.secret_key
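
For reference, that kubectl command is equivalent to a declarative Secret like the one below (placeholder values, and I'm assuming file contents map straight onto the keys). The data key names must exactly match the keystore setting names the S3 plugin looks for:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: minio-credentials
  namespace: default
stringData:
  # Key names must match the Elasticsearch keystore settings exactly:
  s3.client.default.access_key: <minio-access-key>
  s3.client.default.secret_key: <minio-secret-key>
```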

Then added it to the Elasticsearch resource YAML:

secureSettings:
  secretName: "minio-credentials"
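
For context, that sits under spec in the Elasticsearch manifest; a minimal sketch (the apiVersion depends on your ECK release, and newer ECK versions expect secureSettings to be a list of secret references rather than a single object):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1beta1   # v1alpha1 / v1beta1 / v1 depending on ECK version
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.4.0
  secureSettings:
    secretName: minio-credentials
  # ... node configuration omitted
```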

I can see in the operator logs that it's picked up:

{"level":"info","ts":1570195646.0524664,"logger":"license-validation","msg":"ValidationHandler handler called","operation":"CREATE","name":"minio-credentials","namespace":"default"}

{"level":"info","ts":1570195684.2096705,"logger":"es-validation","msg":"ValidationHandler handler called","operation":"UPDATE","name":"quickstart","namespace":"default"}

But when I set up the repository via the console

PUT /_snapshot/my_minio_repository
{
  "type": "s3",
  "settings": {
    "bucket": "esbackups",
    "endpoint": "10.1.1.220:9000",
    "protocol": "http"
  }
}
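
One thing worth double-checking (this is an assumption on my part, not a confirmed fix): the keystore entries are named s3.client.default.*, so they only apply to the client the repository actually uses. Spelling the client out makes that link explicit; these are the same settings as above, with the client named:

```json
PUT /_snapshot/my_minio_repository
{
  "type": "s3",
  "settings": {
    "bucket": "esbackups",
    "client": "default",
    "endpoint": "10.1.1.220:9000",
    "protocol": "http"
  }
}
```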

I get this error:

{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[my_minio_repository] path is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[my_minio_repository] path is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Unable to upload object [tests-86_qgPYEQ3KdDtlhf6kK2g/master.dat] using a single upload",
      "caused_by": {
        "type": "sdk_client_exception",
        "reason": "sdk_client_exception: Unable to load credentials from service endpoint",
        "caused_by": {
          "type": "i_o_exception",
          "reason": "Connect timed out"
        }
      }
    }
  },
  "status": 500
}

Hi,

I am experiencing the same problem with Wasabi on Elasticsearch 7.4. I set the access key and secret key in the Elasticsearch keystore by hand:

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
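
One thing I'd try (an assumption on my side, not a confirmed cause): settings added with elasticsearch-keystore are only picked up on node restart unless you ask the cluster to reload them, and the S3 client credentials are reloadable secure settings. So after adding the keys:

```
bin/elasticsearch-keystore list    # should show both s3.client.default.* entries

POST _nodes/reload_secure_settings
```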

Registering the repository is done with:

PUT _snapshot/my_wasabi_repository
{
  "type": "s3",
  "settings": {
    "bucket": "es-backup",
    "endpoint": "https://s3.eu-central-1.wasabisys.com",
    "protocol": "https"
  }
}

The response is the same as the one posted by Kent: repository verification failed.

This procedure was working on 7.2. Right now I am not sure whether the problem is down to the credentials in the keystore, or whether some defaults in the S3 client changed in a way that affects non-AWS S3 services.

Can someone see what is wrong or missing, or maybe share experiences with repositories on non-AWS S3 services with Elasticsearch 7.4?

Thanks in advance!
Andrej

After trying to figure out what went wrong for quite a while, I can confirm that the registration of the repository as posted is correct and working. In my case the problem was, first, incorrect usage of the keystore (rtfm!). After that we figured out that creating a key from a secret in Kubernetes was not working correctly with Elastic's Helm chart (see GitHub for the reported bug concerning keys from secrets). After updating to Elastic 7.4.1 this problem was solved and keys are now created from secrets correctly.
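
For anyone landing here from the Helm side: with the elastic/elasticsearch chart, the keystore entries come from the keystore value, roughly like this (the secret name is just an example; the keys inside the secret are the keystore setting names):

```yaml
# values.yaml for the elastic/elasticsearch Helm chart
keystore:
  - secretName: s3-credentials   # hypothetical secret containing
                                 # s3.client.default.access_key and
                                 # s3.client.default.secret_key
```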