Snapshot failed, not all data can be restored

I have 3 nodes, let's say node1, node2, and node3.

I have successfully restored the data from my nodes, but the document count decreased from 380k to 77k.

This is what I get from the backup data:

    "hits": {
      "total": 76908,
      "max_score": 1,

and this is my original data:

    "hits": {
      "total": 384959,
      "max_score": 1,

How can I get all my data back?

I want to migrate my ES data to another VM / IP / instance.

Can you elaborate a bit more?

  1. How did you perform the snapshot and restore operations?
  2. Did you check the index size and document count at the time of the snapshot?
  3. Did the snapshot complete successfully?
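
For example, these APIs report the snapshot state and per-shard results (a sketch; the repository and snapshot names below are placeholders for your own):

    # <repository> and <snapshot> are placeholders, not real names
    GET /_cat/snapshots/<repository>?v
    GET /_snapshot/<repository>/<snapshot>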

These are the exact steps I used to back up and restore my data:

  1. Register the repository:
    PUT /_snapshot/es_backup?verify=false
    {
      "type": "fs",
      "settings": {
        "location": "/etc/elasticsearch/es_backup",
        "compress": true
      }
    }

  2. Take the snapshot:
    PUT /_snapshot/es_backup/4dec1048
    {
      "indices": "log",
      "ignore_unavailable": true,
      "include_global_state": false
    }

  3. Then, to check my data, I restored it on the same nodes with:
    POST /_snapshot/es_backup/4dec1048/_restore
    {
      "indices": "log",
      "rename_pattern": "log",
      "rename_replacement": "log-backup"
    }

  4. Then I checked how many documents I have (see the count sketch below):
    GET log-backup/log/_search
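
A quicker way to compare the two document counts directly (a sketch reusing the index names from the steps above):

    GET /log/_count
    GET /log-backup/_count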

The data I want to back up is 222 MB with 380k documents, and the data that actually got backed up is 44.7 MB with 77k documents.

es_backup status:

    "snapshots": [
      {
        "snapshot": "4dec1048",
        "repository": "es_backup",
        "uuid": "5KphFPQVQ0Sp6o92KvOjfw",
        "state": "SUCCESS",
        "include_global_state": false,
        "shards_stats": {
          "initializing": 0,
          "started": 0,
          "finalizing": 0,
          "done": 0,
          "failed": 0,
          "total": 0
        },
        "stats": {
          "number_of_files": 0,
          "processed_files": 0,
          "total_size_in_bytes": 0,
          "processed_size_in_bytes": 0,
          "start_time_in_millis": 0,
          "time_in_millis": 0
        },
        "indices": {}
      }
    ]
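
The status output above reports zero shards and an empty indices map. To double-check which indices the snapshot actually contains and whether any shards failed, the snapshot info API can be queried as well (a sketch with the same repository and snapshot names):

    GET /_snapshot/es_backup/4dec1048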

Thank you for sharing the exact commands you're using; it makes it much easier to help.

I see a couple of strange things:

Why ?verify=false?

Why is "ignore_unavailable" not false?

These options are both asking Elasticsearch to be lenient in the case of problems, but it sounds like you do not want this lenience.
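
For instance, a stricter version of your registration and snapshot requests might look like this (a sketch reusing your repository, snapshot, and index names; wait_for_completion=true simply makes the request block until the snapshot finishes, so any failure shows up in the response):

    # register the repository with verification enabled (the default)
    PUT /_snapshot/es_backup
    {
      "type": "fs",
      "settings": {
        "location": "/etc/elasticsearch/es_backup",
        "compress": true
      }
    }

    # take the snapshot and fail loudly if the index is unavailable
    PUT /_snapshot/es_backup/4dec1048?wait_for_completion=true
    {
      "indices": "log",
      "ignore_unavailable": false,
      "include_global_state": false
    }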

Thank you for your answer, David!

This is why I turned verify to false:
"reason": "[es_logbak] [[uw5kPLo6TwC_ZhKTlQat8Q, 'RemoteTransportException[[node-05][10.32.12.85:9300][internal:admin/repository/verify]]; nested: RepositoryVerificationException[[es_logbak] a file written by master to the store [/etc/elasticsearch/es_backup] cannot be accessed on the node [{node-05}{uw5kPLo6TwC_ZhKTlQat8Q}{LM5ec5DrTHWHxiGpaZaKpg}{10.32.12.85}{10.32.12.85:9300}]. This might indicate that the store [/etc/elasticsearch/es_backup] is not shared between this node and the master node or that permissions on the store don't allow reading files written by the master node];'], [b1q4PAqMQseSROz3Xg7u0A, 'RemoteTransportException[[node-06][10.32.12.86:9300][internal:admin/repository/verify]]; nested: RepositoryVerificationException[[es_logbak] a file written by master to the store [/etc/elasticsearch/es_backup] cannot be accessed on the node [{node-06}{b1q4PAqMQseSROz3Xg7u0A}{ZHXd0YmNSEWx6ZK3wcQNuA}{10.32.12.86}{10.32.12.86:9300}]. This might indicate that the store [/etc/elasticsearch/es_backup] is not shared between this node and the master node or that permissions on the store don't allow reading files written by the master node];']]"

I tried commenting it out:

    // "ignore_unavailable": true,

but I still get the same result. :(

That'd explain it:

Your shared-filesystem repository (NFS?) is not properly accessible across all nodes in your cluster.

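For an "fs" repository, the location has to be the same shared mount (for example over NFS) on every node, it must be listed under path.repo in each node's elasticsearch.yml, and the elasticsearch user needs read/write access to it. Once that is fixed, re-running verification should come back clean (a sketch with your repository name):

    POST /_snapshot/es_backup/_verify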

Ah, I see. I'll try to fix this problem first.
Thank you, David!

