Restoring an index in an Ubuntu VirtualBox (stuck _restore)

Hi everyone,

I have created a snapshot of my index (26 GB) on Mac OS X. I have an Ubuntu virtual machine running on an external HD with 300 GB.

I have shared the Mac's backup folder with the Ubuntu VM and gave the VM user full rights to access it at /media/sf_shared_backup.

Then I registered the snapshot repository in the VM's Elasticsearch as follows:

PUT 192.168.56.101:9200/_snapshot/my_backup

{
	"type": "fs",
    "settings": {
        "location": "/media/sf_shared_backup",
        "compress": true
    }
}
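
As a side note, one quick sanity check for this kind of setup (not something I'm sure is strictly needed here) is to verify the repository right after registering it:

POST http://192.168.56.101:9200/_snapshot/my_backup/_verify

If the VM user couldn't read the shared path, this call would be expected to come back with an error instead of listing the nodes that can access the repository.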

The shared folder in Ubuntu contains all the index data (26 GB). But when I execute

POST http://192.168.56.101:9200/_snapshot/my_backup/2017-02-24/_restore

only the index mapping is created!

GET http://192.168.56.101:9200/_snapshot/my_backup/2017-02-24/_status

returns DONE for all shards, but

GET http://192.168.56.101:9200/delfos_index_homologacao/_stats

returns a total of 0 documents.

Besides that, Elasticsearch in the VM doesn't log any error message!
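
One more data point that might help (a generic check, not something from my original steps): the index recovery API reports per-shard restore progress, so something like this should show whether the shards are actually pulling data from the snapshot:

GET http://192.168.56.101:9200/delfos_index_homologacao/_recovery

Per-shard entries there of type SNAPSHOT with a stage other than DONE would mean the restore is still copying files.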

Is there anything I am missing in this process?

Thanks

Instead of sharing the backup folder, I copied the whole snapshot to the Ubuntu VM with scp.

And unfortunately, nothing changed.

Trying to clarify things a little...

GET http://192.168.56.101:9200/_snapshot/my_backup/2017-02-24/_status

{
  "snapshots": [
    {
      "snapshot": "2017-02-24",
      "repository": "my_backup",
      "uuid": "gl7LdKEaTRGYiTBqdGOC1w",
      "state": "SUCCESS",
      "shards_stats": {
        "initializing": 0,
        "started": 0,
        "finalizing": 0,
        "done": 5,
        "failed": 0,
        "total": 5
      },
      "stats": {
        "number_of_files": 355,
        "processed_files": 355,
        "total_size_in_bytes": 26070261182,
        "processed_size_in_bytes": 26070261182,
        "start_time_in_millis": 1487988320390,
        "time_in_millis": 2296874
      },
      "indices": {
        "delfos_index_homologacao": {
          "shards_stats": {
            "initializing": 0,
            "started": 0,
            "finalizing": 0,
            "done": 5,
            "failed": 0,
            "total": 5
          },
          "stats": {
            "number_of_files": 355,
            "processed_files": 355,
            "total_size_in_bytes": 26070261182,
            "processed_size_in_bytes": 26070261182,
            "start_time_in_millis": 1487988320390,
            "time_in_millis": 2296874
          },
          "shards": {
            "0": {
              "stage": "DONE",
              "stats": {
                "number_of_files": 60,
                "processed_files": 60,
                "total_size_in_bytes": 5173044592,
                "processed_size_in_bytes": 5173044592,
                "start_time_in_millis": 1487988320390,
                "time_in_millis": 949568
              }
            },
            "1": {
              "stage": "DONE",
              "stats": {
                "number_of_files": 70,
                "processed_files": 70,
                "total_size_in_bytes": 5515142542,
                "processed_size_in_bytes": 5515142542,
                "start_time_in_millis": 1487988320390,
                "time_in_millis": 965197
              }
            },
            "2": {
              "stage": "DONE",
              "stats": {
                "number_of_files": 88,
                "processed_files": 88,
                "total_size_in_bytes": 5092188255,
                "processed_size_in_bytes": 5092188255,
                "start_time_in_millis": 1487989285675,
                "time_in_millis": 876054
              }
            },
            "3": {
              "stage": "DONE",
              "stats": {
                "number_of_files": 66,
                "processed_files": 66,
                "total_size_in_bytes": 5133919859,
                "processed_size_in_bytes": 5133919859,
                "start_time_in_millis": 1487990090893,
                "time_in_millis": 526371
              }
            },
            "4": {
              "stage": "DONE",
              "stats": {
                "number_of_files": 71,
                "processed_files": 71,
                "total_size_in_bytes": 5155965934,
                "processed_size_in_bytes": 5155965934,
                "start_time_in_millis": 1487989270189,
                "time_in_millis": 820607
              }
            }
          }
        }
      }
    }
  ]
}

As you can see, it seems the _restore process has already finished, but

GET http://192.168.56.101:9200/delfos_index_homologacao/_stats

returns

{
  "_shards": {
    "total": 10,
    "successful": 0,
    "failed": 0
  },
  "_all": {
    "primaries": {},
    "total": {}
  },
  "indices": {}
}
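
To see what the index's shards are actually doing at this point, the cat shards endpoint lists every shard together with its state (again, a generic check, so take it as a suggestion):

GET http://192.168.56.101:9200/_cat/shards/delfos_index_homologacao?v

Shards sitting in INITIALIZING or UNASSIGNED there would be consistent with the empty _stats output above.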

And when I try to delete the snapshot, I just get the following exception:

DELETE http://192.168.56.101:9200/_snapshot/my_backup/2017-02-24/

{
  "error": {
    "root_cause": [
      {
        "type": "concurrent_snapshot_execution_exception",
        "reason": "[my_backup:2017-02-24/gl7LdKEaTRGYiTBqdGOC1w] cannot delete snapshot during a restore"
      }
    ],
    "type": "concurrent_snapshot_execution_exception",
    "reason": "[my_backup:2017-02-24/gl7LdKEaTRGYiTBqdGOC1w] cannot delete snapshot during a restore"
  },
  "status": 503
}

Is it still running or not? Is it stuck? And why did I get a SUCCESS from the _snapshot _status method??
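
For the "is it still running?" part, the cat recovery API can be narrowed down to a few columns (the exact column names are my assumption for 5.x and may vary by version):

GET http://192.168.56.101:9200/_cat/recovery/delfos_index_homologacao?v&h=index,shard,type,stage,files_percent,bytes_percent

Rows with a stage other than done would mean the restore is still copying data, which would also explain why the snapshot can't be deleted yet.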

I didn't run the restore with wait_for_completion...
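
For comparison, the blocking variant of the restore would look roughly like this (just a sketch, limited to the one index from the snapshot):

POST http://192.168.56.101:9200/_snapshot/my_backup/2017-02-24/_restore?wait_for_completion=true

{
    "indices": "delfos_index_homologacao"
}

With wait_for_completion=true the call only returns once the restore has finished (or failed), instead of right after it has been initiated.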

So it's been very hard to figure out what is going on here.

Can anybody give me a hint?

Thanks a lot,

Guilherme

Thank you guys, but everything is ok now!

I don't know why the _status method returned SUCCESS before all the shards had their documents allocated.

But anyway, it's working as expected!

Thank you!
