I am trying to restore a snapshot from a 5-node v6.8 cluster to a single v7.9 host.
When restoring the snapshot it finishes almost instantly, so no data can have been imported, as it's a 166 GB snapshot. The cluster status goes red and shows 5 unassigned shards, and that's it. How can I get this to restore?
I do not see anything in the logs. Using the Kibana console I see these:
"allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
"explanation" : "primary shard for this replica is not yet active"
GET /
GET /_cat/nodes?v
GET /_cat/health?v
GET /_cat/indices?v
If some outputs are too big, please share them on gist.github.com and link them here.
Could you run that on both clusters?
Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.
Or use markdown style like:
```
CODE
```
The </> toolbar icon is the one to use if you are not writing the markdown yourself.
There's a live preview panel for exactly this reason.
Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
That's the expected behaviour: a 200 OK just means the restore has been initialised. You can use the index recovery API to monitor its progress, and the cluster allocation explain API to determine why any shards aren't allocated.
Replicas cannot be allocated before the primary. You need to determine why the primary isn't active.
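For example, something along these lines in the Kibana console (a sketch only; my_index is a placeholder for the name of the index being restored):

```
# Per-shard view of the restore/recovery progress for the index
GET my_index/_recovery?human

# Compact tabular view of the same information
GET _cat/recovery/my_index?v

# Explain why a specific shard is unassigned (the primary of shard 0 here)
GET _cluster/allocation/explain
{
  "index": "my_index",
  "shard": 0,
  "primary": true
}
```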
After sitting there for a while, this error is now also showing.
All errors posted are from using "GET /_cluster/allocation/explain"
"explanation" : "shard has failed to be restored from the snapshot [my_backup:my_snapshot20201101/cztjiMTZQ_OlZSGDADFqVg] because of [restore_source[my_backup/my_snapshot20201101]] - manually close or delete the index [my_snapshot20201101] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1 52 42 0 0.00 0.01 0.05 dilmrt * MY_HOST
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1605794717 14:05:17 elasticsearch red 1 1 37 37 0 0 10 0 - 78.7%
There are a bunch of indices for monitoring, Kibana, etc. I would guess this is the one you would like to see:
red open my_snapshot20201101 GOJmqrJhTpeB3sjNBnkQwA 5 1
The primary cluster is a custom setup with no web access; it's all command line, and I would not be able to post much about it. I know this might make things difficult, but if there is something in particular that is a must, I can get it out and redact what cannot be posted publicly.
{
  "index" : "my_snapshot20201101",
  "shard" : 4,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NEW_INDEX_RESTORED",
    "at" : "2020-11-17T21:32:40.964Z",
    "details" : "restore_source[my_backup/my_snapshot20201101]",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "PkgLXJrvSFuQhV-wkw2AnA",
      "node_name" : "MY_HOST",
      "transport_address" : "127.0.0.1:9300",
      "node_attributes" : {
        "ml.machine_memory" : "33521811456",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [my_backup:my_snapshot20201101/cztjiMTZQ_OlZSGDADFqVg] because of [restore_source[my_backup/my_snapshot20201101]] - manually close or delete the index [my_snapshot20201101] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        },
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : """node does not match index setting [index.routing.allocation.require] filters [box_type:"warm"]"""
        }
      ]
    }
  ]
}
That's not a global setting, so include_global_state has no effect. You're looking for ignore_index_settings instead -- see the docs for further details.
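A rough sketch of what that retry could look like, based on the repository, snapshot, and index names shown earlier in this thread (adjust to your setup). As the restore_in_progress decider says, the failed index has to be deleted (or closed) first, and the restore is then re-run while ignoring the hot/warm allocation filter:

```
# Remove the failed, partially restored index so the restore can be retried
DELETE my_snapshot20201101

# Retry the restore, dropping the box_type allocation filter carried over
# from the 6.8 hot/warm cluster so the single 7.9 node can hold the shards
POST _snapshot/my_backup/my_snapshot20201101/_restore
{
  "ignore_index_settings": [
    "index.routing.allocation.require.box_type"
  ]
}
```

Deleting and retrying is preferable to the reroute allocate_empty_primary option mentioned in the error, since forcing an empty primary discards that shard's data.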