S3 Repository Plugin - curl: (52) Empty reply from server

Hi All,

I am struggling to snapshot my data nodes' data to an S3 bucket and am getting an empty reply from the server. Can you please suggest what the reason could be?

--elasticsearch.yml has

path.repo: $s3_Bucket/$s3_Prefix/docroot

-- keystore has

echo "$access_key" | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin s3.client.default.access_key
echo "$secret_key" | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin s3.client.default.secret_key

bucket=$(echo $s3_B)
referenceStackName=$(echo $s3_P)/docroot
access_key="{KEY}"
secret_key="{SECRET_KEY}"
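
(Note: keystore changes are only picked up after a node restart or a secure-settings reload; a rough sketch of the reload call, assuming the same basic-auth credentials used in the request below:)

# reload secure settings on all nodes so the new keystore entries take effect
curl -XPOST -u "USERNAME":"PASSWORD" "localhost:9200/_nodes/reload_secure_settings?pretty"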
--REQUEST
curl -v -XPUT -u "USERNAME":"PASSWORD" "localhost:9200/_snapshot/s3_backup?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "$bucket",
    "base_path": "$referenceStackName",
    "proxy.host": "{PROXY_HOST}",
    "access_key": "$access_key",
    "secret_key": "$secret_key",
    "proxy.port": "{PROXY_PORT}",
    "compress": "true"
  }
}'

--Response

* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 9200 (#0)
* Server auth using Basic with user 'XXXXXXX'
> PUT /_snapshot/s3_backup HTTP/1.1
> Host: localhost:9200
> Authorization: Basic XXXXXXXXXXXX
> User-Agent: curl/7.61.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 90
>
* upload completely sent off: 90 out of 90 bytes
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server

What is in the Elasticsearch logs? Do you have a proxy in front of Elasticsearch at all?
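
For example, on a package-based install the logs can usually be checked with something like this (paths are assumptions, adjust for your setup):

# tail the cluster log for the failed request
sudo tail -n 200 /var/log/elasticsearch/*.log
# or, on systemd-based systems:
sudo journalctl -u elasticsearch --since "10 minutes ago"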

Thanks for your reply.

I found the solution to the above problem: using "https"

curl -k -v -XPUT -u "USERNAME":"PASSWORD" "https://localhost:9200/_snapshot/s3_backup?pretty" -H 'Content-Type: application/json' -d'

But I am still not able to take a backup to S3 because of a

"path is not accessible on master node" error.

I am checking the root cause, but I am quite sure the IAM user has the right access.
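
For reference, the repository-s3 documentation suggests the IAM user needs roughly the following S3 permissions (a sketch; the bucket name is a placeholder for my actual bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions"
      ],
      "Resource": ["arn:aws:s3:::mybucket"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    }
  ]
}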

I ran the command below -

curl -k -v -XPUT -u "USERNAME:PASSWORD" "https://localhost:9200/_snapshot/test3?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "mybucket",
    "base_path": "private/temp",
    "proxy.host": "PROXYHOST",
    "proxy.port": "PROXYPORT",
    "compress": "true"
  }
}'

and I get this error -
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[test3] path [private/temp] is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[test3] path [private/temp] is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [private/temp/tests-s1D3r1F2S-mA122xyltdww/master.dat] using a single upload",
      "caused_by" : {
        "type" : "amazon_s3_exception",
        "reason" : "amazon_s3_exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: BBF014C7A886741A; S3 Extended Request ID: CrUNc9ojqdV6AJLEKP3453453454355k2AZ8iKC6csmocr/WahCjgg3DblnA=)"
      }
    }
  },
  "status" : 500
}

The repository needs to be configured on all master and data nodes, and it seems that may not be the case here. Please check your node configuration.
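
One way to check, on each master and data node (a sketch; paths assume a package install):

# the S3 repository plugin must be installed on every node
/usr/share/elasticsearch/bin/elasticsearch-plugin list
# the keystore on every node should list s3.client.default.access_key and s3.client.default.secret_key
/usr/share/elasticsearch/bin/elasticsearch-keystore list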

I did configure the repository.

The issue capturing the snapshot was resolved after removing the proxy URL and port.
Thanks to @warkolm.
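
For anyone hitting the same thing, the registration that ended up working looked roughly like this (a sketch, with the proxy settings dropped; bucket and path are placeholders):

curl -k -XPUT -u "USERNAME:PASSWORD" "https://localhost:9200/_snapshot/s3_backup?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "mybucket",
    "base_path": "private/temp",
    "compress": "true"
  }
}'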

However, now I am not able to restore the snapshots. The following is the SLM policy I have applied:

curl -k -v -XPUT -u "es_account":"ES_ACCOUNT_PASSWORD" "https://localhost:9200/_slm/policy/night-snpsht1?pretty" -H 'Content-Type: application/json' -d'
{
  "schedule": "0 30 1 * * ?",
  "name": "<n-s1-{now/d}>",
  "repository": "s3_backup",
  "config": {
    "indices": ["*"]
  },
  "retention": {
    "expire_after": "15d",
    "min_count": 5,
    "max_count": 50
  }
}'
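
(To double-check the policy was registered, a GET on the same endpoint should show it, roughly:)

curl -k -u "es_account":"ES_ACCOUNT_PASSWORD" "https://localhost:9200/_slm/policy/night-snpsht1?pretty"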

Request - executing the policy:

https://localhost:9200/_slm/policy/night-snpsht1/_execute
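
(The execute call is issued as a POST; roughly:)

curl -k -XPOST -u "es_account":"ES_ACCOUNT_PASSWORD" "https://localhost:9200/_slm/policy/night-snpsht1/_execute?pretty"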

Response
{
"snapshot_name": "n-s1-2020.12.30-d52_0hv9qmsemma75yfqtw"
}

However, when I run the restore request below,

Request -

https://localhost:9200/_snapshot/s3_backup/n-s1-2020.12.30-d52_0hv9qmsemma75yfqtw/_restore

I get the response below -
{
  "error": {
    "root_cause": [
      {
        "type": "snapshot_restore_exception",
        "reason": "[s3_backup:n-s1-2020.12.30-d52_0hv9qmsemma75yfqtw/sZpqtI8_S7y-kbO28cSZ5Q] cannot restore index [.slm-history-1-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"
      }
    ],
    "type": "snapshot_restore_exception",
    "reason": "[s3_backup:n-s1-2020.12.30-d52_0hv9qmsemma75yfqtw/sZpqtI8_S7y-kbO28cSZ5Q] cannot restore index [.slm-history-1-000001] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"
  },
  "status": 500
}

I was able to restore when I specified specific indices in the restore request.
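
For reference, a restore limited to specific indices (and, as the error message suggests, optionally renaming them on restore) can look roughly like this; the index pattern and rename values here are just examples:

curl -k -XPOST -u "es_account":"ES_ACCOUNT_PASSWORD" "https://localhost:9200/_snapshot/s3_backup/n-s1-2020.12.30-d52_0hv9qmsemma75yfqtw/_restore?pretty" -H 'Content-Type: application/json' -d'
{
  "indices": "my-index-*",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}'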

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.