Excuse me, it is not clear to me. Suppose Cluster 1 has physical problems (it burned down) and I copied the snapshot from it and want to put it on Cluster 2 to restore. What then?
I guess that I must create a directory under "/var/log" named "back-long", then create another directory inside "back-long" named "repo", and put the snapshot files in it. Am I on the right track?
Your repository should be a shared network filesystem, like NFS. Each master and data node in the cluster must have full read/write privileges to that shared filesystem, with identically mapped permissions (each must have the same elasticsearch user with the same uid/gid).
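As a minimal sketch of what that could look like on each node (the NFS server name, export path, and mount point here are assumptions, not taken from this thread):

# /etc/fstab entry on every master and data node (hypothetical NFS export)
nfs-server:/exports/es-backups  /var/log/back-long  nfs  defaults  0 0

# elasticsearch.yml on every node
path.repo: ["/var/log/back-long/repo"]

# the elasticsearch user must own the repository path with the same uid/gid everywhere
chown -R elasticsearch:elasticsearch /var/log/back-long/repo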
With multiple clusters, you use the exact same repository creation command (the one I just demonstrated and you already used; I just quoted your own command). There is no tarring or zipping, and no moving files about. A snapshot writes to the shared filesystem. A restore reads from the shared filesystem.

There is no concept of a "burned" cluster here, as the shared filesystem should be completely separate from the cluster itself. If the shared filesystem were to suffer data loss, then you'd use a regular backup/recovery process to restore those files. Most people have RAID redundancy to cover those losses, though, so it would be uncommon for that to happen.
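As a rough sketch of the flow across two clusters sharing the same mount (reusing the repository and snapshot names from this thread; wait_for_completion is just one way to run it):

# On both Cluster 1 and Cluster 2: register the same repository, pointing at the shared path
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "compress": true,
    "location": "/var/log/back-long/repo"
  }
}

# On Cluster 1: take a snapshot (written to the shared filesystem)
PUT /_snapshot/my_backup/snapshot-number-one?wait_for_completion=true

# On Cluster 2: restore it (read from the same shared filesystem)
POST /_snapshot/my_backup/snapshot-number-one/_restore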
OK, I understand you. I know servers rarely burn down, but I want to test it. Assume it is not on NFS, and I copied the snapshot and moved it to another server and would like to restore it. What then?
So, you copied the entire repository to another machine? That machine must then act as the network filesystem, and the repository must sit at the same path and be mounted the same way on every node. After that, it is the same as previously described. The repository still must be created the same way on the new cluster.
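For example (a rough sketch; the hostname is hypothetical), copying the repository contents to the new machine while preserving ownership and permissions could look like:

rsync -a /var/log/back-long/repo/ newhost:/var/log/back-long/repo/

After that, the path.repo setting and the repository creation command on the new cluster are identical to the originals.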
I want to reiterate that this is extremely unconventional, uncommon, and not a likely scenario to ever encounter in a production environment, though certainly not outside the realm of possibility. It is not the best way to learn how to handle snapshot/restore, as it makes things far more complicated than they need to be.
Thank you. Yes, I did.
I changed my "elasticsearch.yml" as below:
path.repo: ["/var/log/back/repo","/var/log/back-long/repo"]
and ran the commands below:
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "compress": true,
    "location": "/var/log/back-long/repo"
  }
}
Result is:
{"acknowledged":true}
Then:
GET /_snapshot/my_backup/_all
Result is:
{
  "snapshots": [
    {
      "snapshot": "snapshot-number-one",
      "uuid": "fsCaA1ZTSCWgvSMO7Ukz7A",
      "version_id": 5040199,
      "version": "5.4.1",
      "indices": [
        "index_name-2017.10.02",
        "index_name-2017.10.03",
        "server2-2017.08.28",
        "beat-2017.10.02",
        "server1-2017.09.01",
        "server1-2017.08.28",
        ".kibana",
        "beat-2017.10.03"
      ],
      "state": "SUCCESS",
      "start_time": "2017-10-10T16:04:56.216Z",
      "start_time_in_millis": 1507651496216,
      "end_time": "2017-10-10T16:04:59.461Z",
      "end_time_in_millis": 1507651499461,
      "duration_in_millis": 3245,
      "failures": [],
      "shards": {
        "total": 36,
        "failed": 0,
        "successful": 36
      }
    }
  ]
}
Then:
POST /_snapshot/my_backup/snapshot-number-one/_restore
Result is:
{
  "error": {
    "root_cause": [
      {
        "type": "snapshot_restore_exception",
        "reason": "[my_backup:snapshot-number-one/fsCaA1ZTSCWgvSMO7Ukz7A] cannot restore index [.kibana] because it's open"
      }
    ],
    "type": "snapshot_restore_exception",
    "reason": "[my_backup:snapshot-number-one/fsCaA1ZTSCWgvSMO7Ukz7A] cannot restore index [.kibana] because it's open"
  },
  "status": 500
}
It tells me ".kibana" is open, so I tried to close it:
POST /.kibana/_close
Result is:
{
  "statusCode": 400,
  "error": "Bad Request",
  "message": "child \"method\" fails because [\"method\" must be one of [HEAD, GET, POST, PUT, DELETE]]",
  "validation": {
    "source": "query",
    "keys": [
      "method"
    ]
  }
}
Why?
That error is because there is already a .kibana index. If Kibana is running, it will automatically create that index. To restore all indices, including .kibana, you must shut down Kibana and delete .kibana before running the restore.
How? Can you tell me the commands?
DELETE /.kibana
Or
curl -XDELETE http://host:port/.kibana
But Kibana must be halted first. If other indices were restored, even partially, they will also need to be deleted before starting the restore.
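As a rough example (the index names are just the ones listed in your snapshot), several previously restored indices can be deleted in one call:

curl -XDELETE 'http://host:port/index_name-2017.10.02,index_name-2017.10.03,beat-2017.10.02,beat-2017.10.03'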
Thank you. The problem is solved and all data was restored via the commands below.
I did:
[root@localhost ~]# curl -XDELETE http://localhost:9200/.kibana
Result is:
{"acknowledged":true}
Then:
POST /.kibana/_close
Result is:
{"acknowledged":true}
Then:
POST /_snapshot/my_backup/snapshot-number-one/_restore
Result is:
{
  "accepted": true
}
Then:
GET /_snapshot/my_backup/snapshot-number-one
Result is:
{
  "snapshots": [
    {
      "snapshot": "snapshot-number-one",
      "uuid": "fsCaA1ZTSCWgvSMO7Ukz7A",
      "version_id": 5040199,
      "version": "5.4.1",
      "indices": [
        "index_name-2017.10.02",
        "index_name-2017.10.03",
        "server2-2017.08.28",
        "beat-2017.10.02",
        "server1-2017.09.01",
        "server1-2017.08.28",
        ".kibana",
        "beat-2017.10.03"
      ],
      "state": "SUCCESS",
      "start_time": "2017-10-10T16:04:56.216Z",
      "start_time_in_millis": 1507651496216,
      "end_time": "2017-10-10T16:04:59.461Z",
      "end_time_in_millis": 1507651499461,
      "duration_in_millis": 3245,
      "failures": [],
      "shards": {
        "total": 36,
        "failed": 0,
        "successful": 36
      }
    }
  ]
}
Then:
POST /.kibana/_open
Result is:
{
  "acknowledged": true
}
Then:
GET _cat/indices?pretty
Result is:
yellow open server2-2017.08.28 skkJ20qtQ0WLQtauVDW10Q 5 1 6 0 121.2kb 121.2kb
yellow open beat-2017.10.17 5rClyycsTlmpupRHC92zsQ 5 1 3 0 19.2kb 19.2kb
yellow open beat-2017.10.02 2jQ1Qd7MTp2f31AfH6DJ2g 5 1 3 0 18.7kb 18.7kb
yellow open .kibana rRl_GVRORZOCm3wc8LgI5w 1 1 3 0 23.3kb 23.3kb
yellow open index_name-2017.10.02 kmufBQbuShGRULwMGjS-Fg 5 1 3 0 19.2kb 19.2kb
yellow open index_name-2017.10.03 dXm1JxZ1RWett1NJ6y_hEQ 5 1 3 0 26.7kb 26.7kb
yellow open beat-2017.10.03 nA8NBwRoS3iN9zNYaLQFIg 5 1 3 0 11.1kb 11.1kb
yellow open server1-2017.09.01 jHp_oaLuRB2rAcjzuguUKQ 5 1 17 0 217kb 217kb
yellow open server1-2017.08.28 gDmlo8_4QvK_iQpx78JdoA 5 1 2 0 40.3kb 40.3kb
yellow open index_name-2017.10.17 s1hIsiZgSfCTK8N14WYsCA 5 1 3 0 11.1kb 11.1kb