Hello, I'm creating a repository that connects to storage over the S3 protocol. I declared proxy.host, proxy.port, protocol, and endpoint in elasticsearch.yml, and the access_key and secret_key in the Elasticsearch keystore.
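For reference, this is roughly what I mean (the client name and the host/port values here are placeholders, not my real ones):

s3.client.default.endpoint: "s3.example.internal"
s3.client.default.protocol: https
s3.client.default.proxy.host: "proxy.example.internal"
s3.client.default.proxy.port: 8080

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key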
Then I tried to create the repository from Kibana and verify it.
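The calls look roughly like this in the Kibana Dev Tools console (repository name test; the bucket name is the one you'll see in my logs below):

PUT _snapshot/test
{
  "type": "s3",
  "settings": {
    "bucket": "ns-elasticlogmngt"
  }
}

POST _snapshot/test/_verify

Here's what the verify returned: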
{ "error": { "root_cause": [ { "type": "repository_verification_exception", "reason": "[test] path is not accessible on master node" } ], "type": "repository_verification_exception", "reason": "[test] path is not accessible on master node", "caused_by": { "type": "i_o_exception", "reason": "Unable to upload object [tests-RpzMWcsuQzqYZEOqpQVR0Q/master.dat] using a single upload", "caused_by": { "type": "sdk_client_exception", "reason": "sdk_client_exception: Unable to execute HTTP request: ns-elasticlogmngt", "caused_by": { "type": "i_o_exception", "reason": "ns-elasticlogmngt" } } } }, "status": 500 }
I run my cluster on VMs, and the S3 storage is an on-premise cloud. I've tested the connection between Elasticsearch and the S3 host (via telnet) and found no problem, so I don't think this is a connectivity issue.
I left the base path empty, but let me confirm that with the storage team.
Let me add more information: I'm not using an AWS S3 repository, but on-premise cloud storage that speaks the S3 protocol. I'm asking whether on-premise cloud storage works with Elasticsearch; hopefully it's possible. Furthermore, I can't test with the AWS CLI, because I don't use AWS S3 storage.
I can confirm that I installed the repository-s3 plugin correctly on both nodes.
Hi,
I had nearly the same problem today.
Have you set the right plugin values in elasticsearch.yml?
I had to restart the nodes after setting these values.
Hello @saschahenke, thanks for the answer.
Should we set the client name to "default", or can we set it to something else?
I ask because I set "s3.client.hcp.endpoint" etc., since I have hcp as the client name.
And the path_style_access parameter: should I set it to true or false?
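To make sure I understand how the pieces have to line up with a custom client name like hcp (the endpoint here is a placeholder; the bucket is mine):

s3.client.hcp.endpoint: "s3.example.internal"

bin/elasticsearch-keystore add s3.client.hcp.access_key
bin/elasticsearch-keystore add s3.client.hcp.secret_key

PUT _snapshot/test
{
  "type": "s3",
  "settings": {
    "bucket": "ns-elasticlogmngt",
    "client": "hcp"
  }
}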
I do not have an easy answer for you... not all on-prem S3 services are supported.
Also, a proxy in the middle can add complications.
I am not sure which command the error above came from; it helps when you show the command and then the error.
What I would do is turn on trace logging, as in the other thread, and look closely at the error messages; you should be able to see the exact endpoint and protocol it is trying to connect to.
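A sketch of one way to turn that on (these logger names target the AWS SDK that the repository-s3 plugin uses; set them back to null when you are done):

PUT _cluster/settings
{
  "transient": {
    "logger.com.amazonaws": "TRACE",
    "logger.com.amazonaws.request": "DEBUG"
  }
}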
Thanks for the further answer.
Sorry that I can't give a proper explanation of the error on my side.
I turned on debug and trace logging; here's what I got:
put repository [test]
using bucket [ns-elasticlogmngt], chunk_size [5tb], server_side_encryption [false], buffer_size [99mb], cannedACL [], storageClass []
using endpoint [https://endpoint.co.id] and region [null]
Using basic key/secret credentials
Configuring Proxy. Proxy Host: ns-elasticlogmngt Proxy Port: 443
Unable to load configuration from com.amazonaws.monitoring.SystemPropertyCsmConfigurationProvider@49e52110: Unable to load Client Side Monitoring configurations from system properties variables!
Unable to load configuration from com.amazonaws.monitoring.EnvironmentVariableCsmConfigurationProvider@4d74451e: Unable to load Client Side Monitoring configurations from environment variables!
Unable to load configuration from com.amazonaws.monitoring.ProfileCsmConfigurationProvider@cb096b5: Unable to load config file
AWS4 String to Sign: '"AWS4-HMAC-SHA256
20211104T013443Z
20211104/us-east-1/s3/aws4_request
11754d46458a7f1395ce440708e1fce00c5655b6108298c8684d0ade1f377f01"
Yes, I tried curl and it connects to the S3 storage.
Here's the command I used:
curl --location --request GET 'https://ns-elasticlogmngt.endpoint.co.id/hs3' \
  --header 'Authorization: access_key:secret_key'
This one... I don't know why the request has whitespace in it (and I don't know how to remove it). You can clearly see that https://endpoint.co.id /ns-elasticlogmngt contains a space, and I think that's the problem: the request can't get through because the URL is malformed. Still, I don't know whether this is the real cause or something else.
Looking at your curl query, that does not look like path_style_access: true to me. I am not an expert, but that is the non-path style: path style appends the bucket to the URL, while the non-path default prepends it, which is what you show in your curl.
Perhaps try setting that back to false:
s3.client.default.path_style_access: false
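For illustration, the two styles address the bucket like this (using the endpoint and bucket from your posts):

# path_style_access: true  -> bucket appended to the endpoint path
https://endpoint.co.id/ns-elasticlogmngt/...

# path_style_access: false -> bucket prepended as a subdomain (virtual-hosted style)
https://ns-elasticlogmngt.endpoint.co.id/...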
I am at about the end of my understanding on this... I do know that not all on-prem S3 services are supported.
I tried removing the proxy host and using only the endpoint and bucket, but now I have a certificate problem; I think it's because of the HTTPS protocol. Is there any documentation about the HTTPS certificate?
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[test_s3] path is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[test_s3] path is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Unable to upload object [tests-smp3lM-mSvCNNBal5IRRSg/master.dat] using a single upload",
      "caused_by": {
        "type": "sdk_client_exception",
        "reason": "sdk_client_exception: Unable to execute HTTP request: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target",
        "caused_by": {
          "type": "i_o_exception",
          "reason": "sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target",
          "caused_by": {
            "type": "validator_exception",
            "reason": "validator_exception: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target",
            "caused_by": {
              "type": "sun_cert_path_builder_exception",
              "reason": "sun_cert_path_builder_exception: unable to find valid certification path to requested target"
            }
          }
        }
      }
    }
  },
  "status": 500
}
Maybe I will try HTTP for now; next I'll try changing the endpoint.
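If I try HTTP, I believe that's just the client protocol setting in elasticsearch.yml (with hcp as my client name):

s3.client.hcp.protocol: http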
I'll also contact my Unix team for help with the cert. If the issue still persists, I think I will close this case; maybe my on-prem S3 is just not compatible enough.
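In case it helps anyone hitting the same PKIX error: one common fix (a sketch, assuming the bundled JDK under /usr/share/elasticsearch/jdk; the alias and cert path are placeholders) is to import the storage endpoint's CA certificate into the JVM truststore that Elasticsearch runs on, then restart the nodes:

# import the on-prem CA into the bundled JDK's truststore
# (default cacerts password is "changeit"; alias and cert path are placeholders)
/usr/share/elasticsearch/jdk/bin/keytool -importcert \
  -keystore /usr/share/elasticsearch/jdk/lib/security/cacerts \
  -storepass changeit \
  -alias onprem-s3-ca \
  -file /path/to/ca.crt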