Trying our in-house S3-compatible storage device as the frozen tier, but seeing errors when verifying the repository. Got confirmation from Elastic support that the device we have fully supports the S3 protocol.
Here are the things done so far:
Elasticsearch v7.13 on RHEL
Made sure there is no network issue between Elasticsearch and the in-house S3 endpoint (both reside in the same DC)
Added access and secret keys to Elasticsearch keystore:
$bin/elasticsearch-keystore list
s3.client.es_s3.access_key
s3.client.es_s3.secret_key
Installed the repository-s3 plugin
Since it's an HTTPS URL, added its cert to the default jdk/lib/security/cacerts truststore, and also to both Elasticsearch keystores (transport & http).
An openssl connection test using the crt file works fine.
The AWS CLI connects fine when using the crt file and also when I add the crt to a PEM CA bundle.
e.g.: /usr/local/bin/aws s3 ls --endpoint-url=https://host:port s3://bucket1/ --ca-bundle $AWS_CA_BUNDLE
Where exactly should the client cert be added for setting up the repo? It seems the plugin is not able to use all of the keystores/truststores available under the Elasticsearch install/config paths. I tried changing "xpack.security.http.ssl.keystore.path" to point to a PEM-format file instead of the p12, but Elasticsearch wouldn't even start when I did so.
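For reference, here is roughly how I understand the wiring is supposed to look for a custom S3-compatible endpoint - the host, port and bucket below are placeholders rather than our real values. Client settings in elasticsearch.yml (the keystore entries above belong to the same es_s3 client name):

s3.client.es_s3.endpoint: "host:port"
s3.client.es_s3.protocol: https

and then the repository registration pointing at that client:

PUT /_snapshot/es_s3
{
  "type": "s3",
  "settings": {
    "bucket": "bucket1",
    "client": "es_s3"
  }
}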
Here's the error when I try to run the PUT snapshot command:
# PUT /_snapshot/es_s3
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[es_s3] path is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[es_s3] path is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-CardaXhOS_KsrWSs-pSKcg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "Failed to connect to service endpoint: ",
        "caused_by" : {
          "type" : "socket_timeout_exception",
          "reason" : "Read timed out"
        }
      }
    }
  },
  "status" : 500
}
If you have any suggestions, please let me know. Thanks for your time.
[2021-10-14T09:41:52,945][WARN ][r.suppressed ] [ingest1] path: /_snapshot/es_s3, params: {pretty=true, repository=es_s3}
org.elasticsearch.repositories.RepositoryVerificationException: [es_s3] path is not accessible on master node
Caused by: java.io.IOException: Unable to upload object [tests-WarEYldYSLyB-kW-08AebQ/master.dat] using a single upload
.....
Caused by: com.amazonaws.SdkClientException: Failed to connect to service endpoint:
.....
Caused by: java.net.SocketTimeoutException: Read timed out
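One thing that might help narrow down the read timeout (I have not tried it yet) is temporarily turning up the AWS SDK logging and re-running the repository PUT, e.g.:

PUT /_cluster/settings
{
  "persistent": {
    "logger.com.amazonaws": "DEBUG"
  }
}

then checking the Elasticsearch log on the master node for the exact request that is timing out, and setting the logger back to null afterwards.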
I also tried NOT giving a client name and instead used the default client (with the minimal expected configuration). Added the default access & secret keys to the Elasticsearch keystore - however I see this error when attempting the same:
org.elasticsearch.repositories.RepositoryVerificationException: [es_s3] path is not accessible on master node
Caused by: java.io.IOException: Unable to upload object [tests-bsxr8NkARKewFF0NPaoFwg/master.dat] using a single upload
...
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: bucket1.host
...
Caused by: java.net.UnknownHostException: bucket1.host
Verified that telnet from the Elasticsearch node to the host on the HTTP port works fine, so it does not look like a basic connectivity issue.
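The UnknownHostException on bucket1.host makes me think the client is using virtual-hosted-style addressing, i.e. prepending the bucket name to the endpoint hostname. If the in-house device only answers on the plain hostname, forcing path-style requests might be worth a try - a sketch only, assuming the device accepts path-style URLs (use the es_s3 prefix instead of default when a named client is configured):

s3.client.default.path_style_access: true

This goes in elasticsearch.yml on every node and needs a restart to take effect.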
You can use a pastebin, but @vee I would suggest engaging Elastic Support to debug this. If you have support, use it.
Also, did you try to access your S3 endpoint directly over HTTP? Is it locked down to HTTPS?
Also, you asked where the cert should go. From the repository-s3 client settings docs:
protocol
The protocol to use to connect to S3. Valid values are either http or https. Defaults to https. When using HTTPS, this plugin validates the repository's certificate chain using the JVM-wide truststore. Ensure that the root certificate authority is in this truststore using the JVM's keytool tool.
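As far as I can tell, that means importing the storage device's CA cert into the cacerts truststore of the JDK that Elasticsearch actually runs with (the bundled one under install-dir/jdk if ES_JAVA_HOME points there) and then restarting the nodes. Roughly - the alias and file path here are just examples, and changeit is the usual default cacerts password:

install-dir/jdk/bin/keytool -importcert -alias inhouse-s3-ca \
  -file /path/to/inhouse-s3-ca.crt \
  -keystore install-dir/jdk/lib/security/cacerts -storepass changeit

The keystores configured under xpack.security.http.ssl / xpack.security.transport.ssl only cover Elasticsearch's own inbound TLS endpoints, not outgoing connections to the repository.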
Thanks guys, I did engage Elastic support a couple of weeks ago, but because it's not a Sev 1 the response time is usually much slower than what's needed, especially when I'm actively trying things. This forum is much quicker to respond, so I thought I'd try here as well.
@zx8086: FYI - this is in-house S3-compatible storage (not AWS cloud), so I'm using what our system admins provided me to connect to it. It works when tried with the AWS CLI from the same Elasticsearch node.
Interesting, I am in fact using v7.13.2 on my end (my bad, I only gave the major.minor version earlier). Interesting that when you tried to connect you don't see the bucket-name prefix. Did you have to enable any additional permissions listed at the time of plugin installation?
bin]$ ./elasticsearch-plugin install repository-s3
-> Installing repository-s3
-> Downloading repository-s3 from elastic
[=================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin requires additional permissions @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission accessDeclaredMembers
* java.lang.RuntimePermission getClassLoader
* java.lang.reflect.ReflectPermission suppressAccessChecks
* java.net.SocketPermission * connect,resolve
* java.util.PropertyPermission es.allow_insecure_settings read,write
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y
-> Installed repository-s3
-> Please restart Elasticsearch to activate any plugins installed
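Once the nodes are restarted, it may also be worth confirming the plugin shows up on every node (it has to be installed on all of them), e.g.:

GET /_cat/plugins?v

which should list repository-s3 against each node name.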
@Stephen - yes, the bundled JDK at install-dir/jdk. That's what both JAVA_HOME and ES_JAVA_HOME point to. I tried removing and re-installing the plugin from the Elasticsearch root dir like you did - still the same behavior.
@zx8086 - didn't quite follow you - did you mean you had to add a java.policy file with those listed permissions while installing the plugin? If so, can you share that java.policy file and its path?