How to restore a snapshot from a 5-node cluster to a single-node host?

Hello all,

I am trying to restore a snapshot from a 5-node v6.8 cluster to a single v7.9 host.
The restore finishes almost instantly, so no data was imported; it's a 166 GB snapshot. My cluster status goes red and shows 5 unassigned shards, and that's it. How can I get this to restore?

Anything in logs?

I do not see anything in the logs. Using the Kibana console I see these:

```
"allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
"explanation" : "primary shard for this replica is not yet active"
```

What exact command did you use?

Honestly, I have tried a ton of things I found online; these are the latest two from my notes.

```
POST /_snapshot/my_backup/my_snapshot20201101/_restore
{
  "indices": "data_stream_1,index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false,
  "rename_pattern": "index_(.+)",
  "rename_replacement": "restored_index_$1",
  "include_aliases": false
}
```

and

```
curl --user elastic -XPUT "localhost:9200/my_snapshot20201101/_settings?pretty" \
  -H 'Content-Type: application/json' \
  -d '{ "number_of_replicas": 0 }'
```

What is the output of:

```
GET /
GET /_cat/nodes?v
GET /_cat/health?v
GET /_cat/indices?v
```

If some outputs are too big, please share them on gist.github.com and link them here.

Could you run that on both clusters?

Please format your code, logs, or configuration files using the </> icon as explained in this guide, not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

If you are not using markdown format, use the </> icon in the editor toolbar.

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.

That's the expected behaviour: a 200 OK just means the restore has been initialised. You can use the index recovery API to monitor its progress, and the cluster allocation explain API to determine why any shards aren't allocated.
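A monitoring sketch (the index name and shard number here are placeholders, not taken from your cluster):

```
# List recovery activity for all indices
GET /_cat/recovery?v

# Ask why a specific shard is unassigned (index/shard are illustrative)
GET /_cluster/allocation/explain
{
  "index": "restored_index_1",
  "shard": 0,
  "primary": true
}
```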

Replicas cannot be allocated before the primary. You need to determine why the primary isn't active.
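Since a single node can never host replicas, it may also help to restore with replicas set to 0 up front; a sketch using the restore API's index_settings option (index names copied from your earlier request):

```
POST /_snapshot/my_backup/my_snapshot20201101/_restore
{
  "indices": "index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false,
  "index_settings": {
    "index.number_of_replicas": 0
  }
}
```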

Looks like, after sitting there for a while, this error is now also showing.
All errors posted are from using `GET /_cluster/allocation/explain`.

"explanation" : "shard has failed to be restored from the snapshot [my_backup:my_snapshot20201101/cztjiMTZQ_OlZSGDADFqVg] because of [restore_source[my_backup/my_snapshot20201101]] - manually close or delete the index [my_snapshot20201101] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"

That is the thing: the recovery API shows nothing. All it ever shows is `{ }`. Please see my reply about the new error showing up with the explain command.

I don't think this is the pertinent part of the response from GET /_cluster/allocation/explain. Please share the whole output.

Thank you for your reply and help. I also added one reply with more info of an error about forcing a primary shard.

```
{
  "name" : "MY_HOST",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Q-kDH2ZlSMmurA1HTM4r-g",
  "version" : {
    "number" : "7.9.3",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "c4138e51121ef06a6404866cddc601906fe5c868",
    "build_date" : "2020-10-16T10:36:16.141335Z",
    "build_snapshot" : false,
    "lucene_version" : "8.6.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```

```
ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1           52          42   0    0.00    0.01     0.05 dilmrt    *      MY_HOST
```

```
epoch      timestamp cluster       status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1605794717 14:05:17  elasticsearch red             1         1     37  37    0    0       10             0                  -                 78.7%
```

There are a bunch of indices for monitoring, Kibana, etc. I would guess this is the one you would like to see:

```
red open my_snapshot20201101 GOJmqrJhTpeB3sjNBnkQwA 5 1
```

The primary cluster is custom with no web UI. It's all command line, and I would not be able to post much about it. I know this might make things difficult, but if there is something in particular that is a must, I can get it out and redact what cannot be posted publicly.

```
{
  "index" : "my_snapshot20201101",
  "shard" : 4,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NEW_INDEX_RESTORED",
    "at" : "2020-11-17T21:32:40.964Z",
    "details" : "restore_source[my_backup/my_snapshot20201101]",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "PkgLXJrvSFuQhV-wkw2AnA",
      "node_name" : "MY_HOST",
      "transport_address" : "127.0.0.1:9300",
      "node_attributes" : {
        "ml.machine_memory" : "33521811456",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [my_backup:my_snapshot20201101/cztjiMTZQ_OlZSGDADFqVg] because of [restore_source[my_backup/my_snapshot20201101]] - manually close or delete the index [my_snapshot20201101] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        },
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : """node does not match index setting [index.routing.allocation.require] filters [box_type:"warm"]"""
        }
      ]
    }
  ]
}
```

Ok, the problem is this decider from the output above:

```
"explanation" : """node does not match index setting [index.routing.allocation.require] filters [box_type:"warm"]"""
```

The reply did not go to you, so I tried to remove it and reply again, but got an error. Hopefully this works.

How would I tell it to ignore, change, or fix that? I thought this would be ignored if the restore does not include the global state.

That's not a global setting, so include_global_state has no effect. You're looking for ignore_index_settings instead; see the docs for further details.

Is this correct? It made no change to the error.

```
POST /_snapshot/my_backup/my_snapshot20201101/_restore
{
  "ignore_index_settings": "warm",
  "indices": "data_stream_1,index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false,
  "include_aliases": false
}
```

It did not go to you again; not sure what I keep hitting. Please see my reply with the settings used.

There are 2 reply buttons.

The red one replies to the thread, while the green one replies to a specific person.

No, you need to list the settings to ignore under ignore_index_settings, not their values. So: index.routing.allocation.require.box_type.
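Putting that together, a sketch of the restore body with the full setting name under ignore_index_settings (the other fields are copied from the earlier attempt):

```
POST /_snapshot/my_backup/my_snapshot20201101/_restore
{
  "ignore_index_settings": [
    "index.routing.allocation.require.box_type"
  ],
  "indices": "data_stream_1,index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false,
  "include_aliases": false
}
```

Note the earlier restore_in_progress decider message: the partially restored index would need to be deleted or closed before retrying the restore.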

I tried the following with no luck. If this is not correct, would you be willing to provide the proper code?

```
PUT /_settings
{
  "settings": {
    "index.routing.allocation.require.box_type": "warm"
  }
}
```
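For what it's worth, that request re-applies the warm requirement to every index. If the goal is to let the already-restored index allocate on this single node, clearing the setting may be closer to what's needed; a sketch against the index from the earlier output (null removes the setting):

```
PUT /my_snapshot20201101/_settings
{
  "index.routing.allocation.require.box_type": null
}
```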