I am using ES 6.2.4.
I have a remote server that cannot be reached from our network; it holds the data I need to back up. I created a snapshot there, physically copied the resulting repository files to my system, and restored them on my ES node. Both nodes are on the same version. The restore status is true, but when I search those indices I get an "all shards failed" error. I also checked the health of the shards and found they were red and unassigned.
What did I do wrong, or is it not possible to restore this way?
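For reference, this is roughly what I did; the repository location is a placeholder for my actual path, and path.repo in elasticsearch.yml includes it on both nodes:

# register the shared filesystem repository (same command on both nodes)
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/path/to/backups/my_backup"
  }
}

# take the snapshot on the remote node
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true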
It should indeed be possible to take a snapshot, copy the repository elsewhere, and restore from the copy of the repository.
What does "restore status is true" mean exactly? Can you share the exact response?
The allocation explain API will give more detail about why the shards are unassigned. Can you share that here?
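Something like this, with no request body, will explain one of the unassigned shards:

GET /_cluster/allocation/explain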
First of all, thanks a lot for replying.
By "restore status" I meant that when I restore using the following command in Kibana:
POST /_snapshot/my_backup/snapshot_1/_restore
I get the following response:
{
  "accepted": true
}
I think this means the restore was successful?
And for your second question, I ran the following command as you suggested:
GET /_cluster/allocation/explain
and got this response:
{
  "index": "test-2019.10",
  "shard": 0,
  "primary": false,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "CLUSTER_RECOVERED",
    "at": "2019-03-26T10:10:42.094Z",
    "last_allocation_status": "no_attempt"
  },
  "can_allocate": "no",
  "allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions": [
    {
      "node_id": "HNviUh9GQXWifwzJ5Y-ZkQ",
      "node_name": "HNviUh9",
      "transport_address": "127.0.0.1:9300",
      "node_decision": "no",
      "deciders": [
        {
          "decider": "enable",
          "decision": "NO",
          "explanation": "no allocations are allowed due to cluster setting [cluster.routing.allocation.enable=none]"
        },
        {
          "decider": "same_shard",
          "decision": "NO",
          "explanation": "the shard cannot be allocated to the same node on which a copy of the shard already exists [[test-2019.10][0], node[HNviUh9GQXWifwzJ5Y-ZkQ], [P], s[STARTED], a[id=0KEt19j-RL6w8ZqFO0sfwQ]]"
        }
      ]
    }
  ]
}
Note that this index is one of the indices I restored. I don't know why the details of only this index were shown. I'm new to the ELK Stack.
It does not. Restores happen asynchronously: that response only means the restore has started. You can either monitor the progress of the restore you just started, or run POST /_snapshot/my_backup/snapshot_1/_restore?wait_for_completion=true to wait for completion. (Also, with no request body the allocation explain API reports a single arbitrary unassigned shard, which is why you only saw one index; the same decider is very likely blocking the others too.)
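To monitor progress you could watch shard recovery; a quick sketch, using the index name from your output as an example:

# overview of ongoing and completed recoveries across the cluster
GET /_cat/recovery?v

# detailed recovery status for one index
GET /test-2019.10/_recovery?human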
You have disabled allocation, so although you're trying to restore these shards they cannot be allocated anywhere. You need to set cluster.routing.allocation.enable to something else.
What do you mean by "something else"?
Should I do this before taking the snapshot, or should I set it on my system and then do the restore? Or both?
Sorry, that "something else" was supposed to be a link to the docs for that setting. Usually the default is what you want, but I don't know why you set it to none, so I can't say which of the options you should choose. It doesn't have any effect on taking a snapshot, and any value but none will allow the restore to succeed. You can set it at any time, including after starting the restore.
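For example, to put it back to the default of all (assuming it was set persistently; use "transient" instead if that's how it was set):

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}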