Error i_o_exception on s3 repository

Hello, I am creating a repository that connects to storage over the S3 protocol. I declared proxy.host, proxy.port, protocol, and endpoint in elasticsearch.yml, and I also added the access_key and secret_key to the Elasticsearch keystore.
Then I tried to create the repository from Kibana and verify it. Here's what I got:

{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[test] path is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[test] path is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Unable to upload object [tests-RpzMWcsuQzqYZEOqpQVR0Q/master.dat] using a single upload",
      "caused_by": {
        "type": "sdk_client_exception",
        "reason": "sdk_client_exception: Unable to execute HTTP request: ns-elasticlogmngt",
        "caused_by": {
          "type": "i_o_exception",
          "reason": "ns-elasticlogmngt"
        }
      }
    }
  },
  "status": 500
}

I run my cluster on a VM, and the S3 storage is an on-premise cloud. I've tested the connection between Elasticsearch and the S3 host (through telnet) and found no problem, so I don't think this is a connectivity issue.

Any help is much appreciated
thanks in advance

Hello @alfianaf, welcome to the community!

Some ideas to trace:

  1. Did you configure the repository base path correctly? If the repository is at the top level of the bucket, it should be left empty.
  2. Did you test a sample upload using the AWS CLI with the access key and secret you configured in your Elasticsearch keystore?
  3. Did you install the S3 plugin on both master and data nodes?
  4. Did you reload secure settings after configuring the keystore? POST _nodes/reload_secure_settings
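As a sketch, the keystore setup and step 4 usually look like this from the Elasticsearch home directory; the client name `default` and the address localhost:9200 are assumptions, so adjust them to your setup:

```shell
# Add the S3 credentials to the Elasticsearch keystore
# (each command prompts for the value; "default" is the assumed client name).
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key

# Reload secure settings on all nodes without a restart
# (assumes Elasticsearch listens on localhost:9200).
curl -X POST "http://localhost:9200/_nodes/reload_secure_settings"
```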

Hello @Ayd_Asraf, thanks for the answer.

  1. I left the base path empty, but let me confirm this one with the storage team.
  2. Let me add more information: I don't use an AWS S3 repository; I use on-premise cloud storage that speaks the S3 protocol. I asked about using on-premise cloud storage with Elasticsearch, and hopefully this is possible. Furthermore, I can't use the AWS CLI because I don't use AWS S3 storage.
  3. I can confirm that I installed the S3 plugin correctly on both nodes.
  4. I did.

Again, thanks in advance.

Hi,
I had nearly the same problem today.
Have you set the right plugin values in elasticsearch.yml?
I had to restart the nodes after setting these values.

s3.client.default.endpoint:
s3.client.default.path_style_access:
s3.client.default.protocol:

We use StorageGRID as S3 compatible storage engine and with these settings it works.

Hope it helps,
Sascha


Hello @saschahenke, thanks for the answer.
Should we set the client name to "default", or can we set it to something else? I ask because I set "s3.client.hcp.endpoint" etc., since I have hcp as the client name.

And for the path_style_access parameter, should I set it to true or false?

Thanks in advance as always
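For what it's worth, the repository-s3 plugin lets you reference a non-default client name through the `client` repository setting when registering the repository. A sketch using this thread's bucket and client names:

```
PUT _snapshot/test
{
  "type": "s3",
  "settings": {
    "bucket": "ns-elasticlogmngt",
    "client": "hcp"
  }
}
```

With that in place, the `s3.client.hcp.*` settings from elasticsearch.yml and the keystore are the ones that apply to this repository.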

Hi @alfianaf

Here is a rather lengthy thread on S3 on-prem; perhaps there will be some help in it.

In the end, it was the s3.client.default.path_style_access parameter that needed to be set.

Hi @stephenb, thanks for the answer.
I think I've set up all of the parameters:

s3.client.default.proxy.host: ns-elasticlogmngt
s3.client.default.proxy.port: 443
s3.client.default.protocol: https
s3.client.default.endpoint: endpoint.co.id
s3.client.default.path_style_access: true

The error is still the same:

{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[test] path  is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[test] path  is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Unable to upload object [tests-tk3EYLO7Tx6wuH9Hg-g5lw/master.dat] using a single upload",
      "caused_by": {
        "type": "sdk_client_exception",
        "reason": "sdk_client_exception: Unable to execute HTTP request: ns-elasticlogmngt",
        "caused_by": {
          "type": "i_o_exception",
          "reason": "ns-elasticlogmngt"
        }
      }
    }
  },
  "status": 500
}

I've also set s3.client.default.access_key and s3.client.default.secret_key in the elasticsearch-keystore.

I do not have an easy answer for you ... not all on-prem S3 services are supported.

Also, a proxy in the middle can add complications.

I am not sure which command the error above is from; it helps when you show the command and then the error.

What I would do is turn on trace logging, as in the other thread, and look closely for error messages that might help. You should be able to see the exact endpoint and protocol it is trying to connect to.
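As a sketch, AWS SDK logging can usually be raised at runtime through the cluster settings API; the logger names below are the standard AWS SDK packages, so adjust the levels as needed:

```
PUT _cluster/settings
{
  "transient": {
    "logger.com.amazonaws": "DEBUG",
    "logger.com.amazonaws.request": "TRACE"
  }
}
```

Setting them back to null afterwards restores the default log levels.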

Thanks for the further answer.
Sorry that I can't give a proper explanation of the error on my side.
I tried to turn on debug and trace logging; here's what I got:

put repository [test]
using bucket [ns-elasticlogmngt], chunk_size [5tb], server_side_encryption [false], buffer_size [99mb], cannedACL [], storageClass []
using endpoint [https://endpoint.co.id] and region [null]
Using basic key/secret credentials
Configuring Proxy. Proxy Host: ns-elasticlogmngt Proxy Port: 443
Unable to load configuration from com.amazonaws.monitoring.SystemPropertyCsmConfigurationProvider@49e52110: Unable to load Client Side Monitoring configurations from system properties variables!
Unable to load configuration from com.amazonaws.monitoring.EnvironmentVariableCsmConfigurationProvider@4d74451e: Unable to load Client Side Monitoring configurations from environment variables!
Unable to load configuration from com.amazonaws.monitoring.ProfileCsmConfigurationProvider@cb096b5: Unable to load config file
AWS4 String to Sign: '"AWS4-HMAC-SHA256
20211104T013443Z
20211104/us-east-1/s3/aws4_request
11754d46458a7f1395ce440708e1fce00c5655b6108298c8684d0ade1f377f01"
AWS4 Canonical Request: '"PUT
/ns-elasticlogmngt/tests-CoBM1rZWTvCjFiQr5iR3Lg/master.dat

amz-sdk-invocation-id:f9390c3d-54ed-b91a-a687-bd9c2af73a97
amz-sdk-retry:0/0/500
content-length:22
content-type:application/octet-stream
Sending Request: PUT https://endpoint.co.id /ns-elasticlogmngt/tests-CoBM1rZWTvCjFiQr5iR3Lg/master.dat Headers: (amz-sdk-invocation-id: f9390c3d-54ed-b91a-a687-bd9c2af73a97, Content-Length: 22, Content-Type: application/octet-stream, User-Agent: aws-sdk-java/1.11.749 Linux/3.10.0-1160.el7.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.144-b01 java/1.8.0_144 vendor/Oracle_Corporation, x-amz-acl: private, x-amz-storage-class: STANDARD, )
Retriable error detected, will retry in 72ms, attempt number: 0

and it loops like that several times.

So you have not actually shown all of the configuration, or the command you are running to register the repository.

Can you show your exact configuration in elasticsearch.yml and the exact command to register the repo that resulted in that error and those log lines?

And can the S3 bucket and path be reached with curl from the Elasticsearch host via the proxy?

Can you try without the proxy? It seems like an added variable.

This is interesting; somehow it does not look right to me.
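For comparison, something like the following could check both routes from the Elasticsearch host; the hostnames are the ones quoted in this thread, so substitute your own:

```shell
# Through the proxy, mirroring the elasticsearch.yml proxy settings.
curl -v --proxy 'https://ns-elasticlogmngt:443' \
  'https://endpoint.co.id/ns-elasticlogmngt/'

# Bypassing any proxy, for comparison.
curl -v --noproxy '*' 'https://endpoint.co.id/ns-elasticlogmngt/'
```

If only the proxied request fails, the proxy host or port is the variable to look at.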

This is the configuration I set in my elasticsearch.yml:

s3.client.default.proxy.host: ns-elasticlogmngt
s3.client.default.proxy.port: 443
s3.client.default.protocol: https
s3.client.default.endpoint: endpoint.co.id
s3.client.default.path_style_access: true

The command to register the repo:

PUT _snapshot/test
{
  "type": "s3",
  "settings": {
    "bucket": "ns-elasticlogmngt"
  }
}

Yes, I tried curl and it connects to the S3 storage.
Here's the command I used:

curl --location --request GET 'https://ns-elasticlogmngt.endpoint.co.id/hs3' \
  --header 'Authorization: access_key:secret_key'

About this one: I don't know why the request has whitespace in it (and I don't know how to remove it). You can clearly see that https://endpoint.co.id /ns-elasticlogmngt contains a space. I think the problem is here; the request couldn't get through because the URL is malformed. Still, I don't know whether this is the real problem or something else.

Looking at your curl query, that does not look like path_style_access: true to me. I am not an expert, but that is the non-path style: path style appends the bucket to the URL path, while the non-path-style default prepends it as a subdomain, which is what your curl shows.

Perhaps try setting it back to false:

s3.client.default.path_style_access: false

I am at about the end of my understanding on this... I do know that not all on-prem S3 services are supported.
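To make the difference between the two addressing styles concrete, here is a small sketch printing the two URL shapes; the bucket and endpoint names are the ones from this thread, and the object key is hypothetical:

```shell
bucket="ns-elasticlogmngt"
endpoint="endpoint.co.id"
object="tests-abc/master.dat"

# Path-style: the bucket is the first segment of the URL path.
printf 'path-style:     https://%s/%s/%s\n' "$endpoint" "$bucket" "$object"

# Virtual-hosted (non-path) style: the bucket becomes a subdomain,
# matching the curl command shown earlier in the thread.
printf 'virtual-hosted: https://%s.%s/%s\n' "$bucket" "$endpoint" "$object"
```

The SDK log line above shows the bucket in the path, so the client was in fact using path-style addressing at that point.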

Since I set the parameter in elasticsearch.yml, can I override it with a setting when I call the API?

PUT _snapshot/test_s3
{
  "type": "s3",
  "settings": {
    "bucket": "ns-elasticlogmngt",
    "path_style_access": false
  }
}

because I have to restart the cluster if I want to change elasticsearch.yml.

I tried removing the proxy host and using only the endpoint and bucket, but now I have a certificate problem; I think it's because of the https protocol. Is there any documentation about the https certificate?

{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[test_s3] path  is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[test_s3] path  is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Unable to upload object [tests-smp3lM-mSvCNNBal5IRRSg/master.dat] using a single upload",
      "caused_by": {
        "type": "sdk_client_exception",
        "reason": "sdk_client_exception: Unable to execute HTTP request: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target",
        "caused_by": {
          "type": "i_o_exception",
          "reason": "sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target",
          "caused_by": {
            "type": "validator_exception",
            "reason": "validator_exception: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target",
            "caused_by": {
              "type": "sun_cert_path_builder_exception",
              "reason": "sun_cert_path_builder_exception: unable to find valid certification path to requested target"
            }
          }
        }
      }
    }
  },
  "status": 500
}
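For reference, a "PKIX path building failed" error generally means the JVM running Elasticsearch does not trust the endpoint's certificate chain. One common fix, sketched below with hypothetical file paths, is to import the storage endpoint's certificate into the JVM truststore and restart the nodes; $JAVA_HOME, the JRE 8 cacerts location, and the default changeit password are assumptions:

```shell
# Fetch the certificate presented by the endpoint
# (endpoint name from this thread; output path is hypothetical).
openssl s_client -connect endpoint.co.id:443 -showcerts </dev/null \
  | openssl x509 -outform PEM > /tmp/s3-endpoint.pem

# Import it into the JVM truststore used by Elasticsearch.
keytool -importcert -alias s3-endpoint -file /tmp/s3-endpoint.pem \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit -noprompt
```

If the endpoint uses a certificate signed by an internal CA, importing the CA certificate instead of the leaf certificate is usually the more durable choice.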

Progress ....

Have you tried http, just to test?

When you curl, is there any issue with the cert?

Does the curl go through the proxy? Are you really using a proxy?

Perhaps try:

s3.client.default.endpoint: ns-elasticlogmngt.endpoint.co.id

I am running out of ideas.

Weird that I don't have a problem when using curl.

Maybe I will try http for now; next I'll try changing the endpoint.

I'll also contact my unix team to help me with the cert. If the issue still persists, I think I will close this case; maybe my on-prem S3 is not compatible enough.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.