Elasticsearch snapshot/restore to S3

Hi,

I have installed Elasticsearch 8.6.2 and Kibana 8.6.2 on the same standalone server for testing purposes.
I'm planning to store data snapshots in S3 and restore from there, but I'm facing issues while creating the repository (it's not connecting). From the server I am able to communicate with S3, but not via the Kibana UI. I receive the following error during repository verification. Can someone help with this?


  "name": "ResponseError",
  "meta": {
    "body": {
      "error": {
        "root_cause": [
          {
            "type": "repository_verification_exception",
            "reason": "[****] path  is not accessible on master node"
          }
        ],
        "type": "repository_verification_exception",
        "reason": "[****] path  is not accessible on master node",
        "caused_by": {
          "type": "i_o_exception",
          "reason": "Unable to upload object [tests-2cIEc1oJRoabyhojn98MQg/master.dat] using a single upload",
          "caused_by": {
            "type": "amazon_s3_exception",
            "reason": "The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: NSHPXN230XB9DBTR; S3 Extended Request ID: G15vrRvWfJuf/OSnqEjw9lA/o7tV48ac9ICgWz6yfcD703akbi/zIeneVJ6vM+OrHV19+wLhVvg=; Proxy: null)"
          }
        }
      },
      "status": 500
    },
    "statusCode": 500,
    "headers": {
      "x-opaque-id": "47b9fb6e-9077-4ddf-ba5c-2aaf6c85db3e;kibana:application:management:",
      "x-elastic-product": "Elasticsearch",
      "content-type": "application/json;charset=utf-8",
      "content-length": "721"
    },
    "meta": {
      "context": null,
      "request": {
        "params": {
          "method": "POST",
          "path": "/_snapshot/****/_verify",
          "querystring": "",
          "headers": {
            "user-agent": "Kibana/8.6.2",
            "x-elastic-product-origin": "kibana",
            "authorization": "Basic ZWxhc3RpYzo5QnArX0FmV3ZNc05rTngwNFVKcQ==",
            "x-opaque-id": "47b9fb6e-9077-4ddf-ba5c-2aaf6c85db3e;kibana:application:management:",
            "x-elastic-client-meta": "es=8.4.0p,js=16.18.1,t=8.2.0,hc=16.18.1",
            "accept": "application/vnd.elasticsearch+json; compatible-with=8,text/plain"
          }
        },
        "options": {
          "opaqueId": "47b9fb6e-9077-4ddf-ba5c-2aaf6c85db3e;kibana:application:management:",
          "headers": {
            "x-elastic-product-origin": "kibana",
            "user-agent": "Kibana/8.6.2",
            "authorization": "Basic ZWxhc3RpYzo5QnArX0FmV3ZNc05rTngwNFVKcQ==",
            "x-opaque-id": "47b9fb6e-9077-4ddf-ba5c-2aaf6c85db3e",
            "x-elastic-client-meta": "es=8.4.0p,js=16.18.1,t=8.2.0,hc=16.18.1"
          }
        },
        "id": 1
      },

Thanks,
Seetharaman

How did you test this?

Kibana is a UI for Elasticsearch; it does not talk to S3. It is your Elasticsearch node that talks to S3. Your error is not a communication error but a permissions error.

Check the message:

The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: NSHPXN230XB9DBTR; S3 Extended Request ID: G15vrRvWfJuf/OSnqEjw9lA/o7tV48ac9ICgWz6yfcD703akbi/zIeneVJ6vM+OrHV19+wLhVvg=; Proxy: null

You need to double check the permissions you used while creating the repository.
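One quick way to rule Elasticsearch out is to test the exact same key pair with the AWS CLI (a sketch, assuming the CLI is installed; the profile name `es-snapshots` and the bucket name are placeholders):

```
# Enter the same access key and secret key that you put in the Elasticsearch keystore
aws configure --profile es-snapshots

# Confirms the key pair is valid at all (a 403 here means the keys themselves are bad)
aws sts get-caller-identity --profile es-snapshots

# Confirms the key pair can reach the snapshot bucket
aws s3 ls s3://your-bucket --profile es-snapshots
```

If `get-caller-identity` fails with this key pair too, the problem is the keys themselves, not the Elasticsearch configuration.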

Hi leandrojmp,

I tested from the node which hosts both Elasticsearch and Kibana, using the AWS CLI.
The node is able to communicate with S3 and has uploaded objects to the specified bucket, but the UI says it's an invalid key / permission error, whereas the node has no problem.

Thanks,
Seetharaman

I also made the bucket public, which in turn returns the same error.

How did you add the credentials in Elasticsearch? According to this documentation?

Did you restart the node or reload the secure settings after adding the credentials?

Yes, I have added the keys both in the Elasticsearch keystore and via aws configure, and I also restarted all the services and the node.

How to reload secure settings?

If you restarted the nodes, that should be ok.
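For reference, the S3 client credentials are reloadable secure settings, so they should also be picked up without a full restart via the reload API:

```
POST /_nodes/reload_secure_settings
```

A restart works just as well; the API is only a faster option on a node that is busy.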

Restarting doesn't work, David.

You need to share more I think:

  • What exact command did you run to update your credentials?
  • How did you define your repository?

Maybe share the output of:

GET /_snapshot

What exact command did you run to update your credentials?
/usr/share/elasticsearch/bin/.elasticsearch-keystore add keys

GET /_snapshot

{
  "elk-demo": {
    "type": "s3",
    "settings": {
      "bucket": "elk-cn-registry"
    }
  }
}
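For reference, a repository like the one shown above would have been created with something along these lines (same bucket name as in the output):

```
PUT /_snapshot/elk-demo
{
  "type": "s3",
  "settings": {
    "bucket": "elk-cn-registry"
  }
}
```

Since no `client` setting appears here, the repository uses the `default` S3 client, which is why the `s3.client.default.*` keystore entries are the ones that matter.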

POST /_snapshot/elk-demo/_verify

{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[elk-demo] path  is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[elk-demo] path  is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Unable to upload object [tests-amh18k19S_KlnGGGvMfcuA/master.dat] using a single upload",
      "caused_by": {
        "type": "amazon_s3_exception",
        "reason": "The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: QATQ4EDR2ZTEE5HK; S3 Extended Request ID: i7fJupZH2EbZ/KYZrnjBLpG//3JRrK6BAXmC6wn2s4Rlw/YXkv5yDcAn+Zmns1QEFtQuXxzR5ME=; Proxy: null)"
      }
    }
  },
  "status": 500
}

Which commands exactly did you run?

You need to run 2 commands.

One to add the access_key

bin/elasticsearch-keystore add s3.client.default.access_key

And one to add the secret_key

bin/elasticsearch-keystore add s3.client.default.secret_key

I have added both keys, access and secret, to the respective keystore path.

Could you run this:

bin/elasticsearch-keystore list

And share the output here without changing anything?

Sure, please find the output below.

[screenshot of the elasticsearch-keystore list output]

Did you do that on all nodes?

Just one node, David. Does the metrics server need this configuration too? We are only collecting logs and metrics from that server.

What's the relationship with logs and metrics when you are talking about doing backups with S3 repositories?

I meant that every node in the cluster needs those settings.

Hi David,

We have only one node that hosts Elasticsearch and Kibana, and I have added the keys on that node.
The other machine is a client; we have just installed Filebeat and Metricbeat on it for monitoring purposes.

One more question: does the Basic license include the snapshot/restore feature?

If you added the keys and restarted the node, there is not much left to troubleshoot; the error is pretty clear: the credentials are not working.

I would remove the settings from the keystore and add them again, double-checking the values.
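A minimal sketch of that, assuming a single node and the default S3 client:

```
# Remove the existing entries, then re-add them, pasting the values carefully
bin/elasticsearch-keystore remove s3.client.default.access_key
bin/elasticsearch-keystore remove s3.client.default.secret_key
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key

# Then restart the node (or reload secure settings) and run the verify again
```

Watch for stray whitespace or newline characters when pasting the values; that is an easy way to end up with an InvalidAccessKeyId error.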

Yes, snapshot and restore works with the Basic license.
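For completeness, once verification passes, taking and restoring a snapshot with this repository would look roughly like this (the snapshot name and index pattern are examples, not values from this thread):

```
PUT /_snapshot/elk-demo/snapshot-1?wait_for_completion=true

POST /_snapshot/elk-demo/snapshot-1/_restore
{
  "indices": "my-index-*"
}
```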