Restore to new cluster - No MetaData Problem


(Bruce) #1

I'm testing the backup system. I want to be able to restore to a new/different cluster/host if for some reason the original crashes and is entirely unrecoverable.

So, I did a backup to an NFS storage on a backup machine. I then brought up a new server with the same version of ES (5.0.2). I mounted the same NFS target, added path.repo.

That is where the problem starts. While I can query the original host and see all the snapshots, I cannot do so on the new one, even though it is reading from the same data source.

I have seen mention of metadata, which is apparently stored separately from the data itself. Do I need that in order for the restore to work properly?

Should the Cluster and Node name be the same in order for the restore to work?

I DID do the "registering" thing, but no snapshots are visible.
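For reference, the "registering" call I mean is along these lines (a sketch with placeholder host, repository name, and path, not my exact values):

```shell
# Register an existing fs repository on the new cluster. Note that ES 5.x
# accepts this call even if the location is empty or wrong, so a successful
# registration does not prove the path actually contains the snapshots.
# "readonly": true is optional but sensible when restoring on a second cluster.
curl -H "Content-Type: application/json" -X PUT \
  "http://<newHostIP>:9200/_snapshot/<repositoryName>" \
  -d '{ "type": "fs", "settings": { "location": "/path/to/backups", "readonly": true } }'
```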

Any direction or guidance would be MOST appreciated, as a backup that can't function after a catastrophe is of no real use to anyone except the guy who sold you the hard drives to store it on.

Thanks in advance!


(Alexander Reelsen) #2

It would help a lot if you provided your configuration settings, repository configuration, and the snapshot and restore calls you executed, so people can follow your steps in more detail.


(Bruce) #3

First, note that I initially tried to recover from the NFS mount. When that didn't work, I copied the backups to a local folder and tried to restore from there. The old path.repo is commented out.
elasticsearch.yml

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: Test-Cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# --------------------------HTTP--------------------------------
http.host: "10.3.0.39"
# http.host: "<hostIP>"
http.cors.enabled: true
http.cors.allow-origin: "*"
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
#path.repo: ["/media/nfs/es/","/media/nfs2/es/"]
path.repo: ["/esrestore/"]
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.3.0.39
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-discovery-zen.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Calls to initialize the repo and then create the snapshots. They are in a bash script so you'll see variables in there, but it does work, as the repo and snapshots exist on the original server. Let me know if you'd like these expanded:

sudo curl -g -H "Content-Type: application/json" -X PUT -d '{ "type": "fs", "settings": { "location": "'$SAVEPATH'","max_snapshot_bytes_per_sec" : "50mb","max_restore_bytes_per_sec" : "50mb" } }' http://$HOSTIP:9200/_snapshot/$newRepositoryName

sudo curl -g -H "Content-Type: application/json" -X PUT -d '{ "type": "fs", "settings": { "location": "'$SAVEPATH/$SNAPSHOTNAME'" } }' "http://$HOSTIP:9200/_snapshot/$newRepositoryName/$SNAPSHOTNAME?wait_for_completion=true&pretty"
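One quick check not in the script above is the repository verify API, which confirms the nodes can actually read the registered location (the variable names below just follow the script's naming):

```shell
# Verify the registered repository on whichever cluster it is registered with.
# A failure here usually means the path.repo / location is wrong or unreadable.
curl -X POST "http://$HOSTIP:9200/_snapshot/$newRepositoryName/_verify?pretty"
```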

Call to see repos:
user@backuptestnew:~$ curl -XGET '10.3.0.39:9200/_cat/repositories?v'
id type
ESCB1Cluster fs

Call to see snapshots:
user@backuptestnew:~$ curl -XGET '10.3.0.39:9200/_snapshot/ESCB1Cluster/_all?pretty=true'
{
"snapshots" : [ ]
}

or using cat: curl -XGET '10.3.0.39:9200/_cat/snapshots/ESCB1Cluster'
which returns nothing at all.
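An empty snapshot list after a successful registration often means the registered directory doesn't actually contain the repository metadata. A minimal sketch of that sanity check (the helper name and the /esrestore path are just the example values from the yml above; in ES 5.x an fs repository's root holds an index-N file, index.latest, and snap-*.dat files):

```shell
# Hypothetical helper: check that a registered fs repository path actually
# contains snapshot metadata. If the directory is empty or wrong, the cluster
# still registers it happily and then reports "snapshots" : [ ].
check_repo() {
  local repo_dir="$1"
  if ls "$repo_dir"/index-* >/dev/null 2>&1; then
    echo "repository metadata found in $repo_dir"
  else
    echo "no index-N file in $repo_dir"
  fi
}

check_repo "/esrestore"   # path.repo entry from the yml above
```

If the index-N file is missing, the copy from NFS to the local folder likely missed the repository root (for example, copying only the indices/ subtree).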

The same calls on the original host (I'm only including the cat one because the _snapshot one is too verbose):
user@original:/usr/local/bin$ curl -XGET '10.3.0.28:9200/_cat/snapshots/ESCB1Cluster'
2017-07-20_16-15 SUCCESS 1500581754 16:15:54 1500585295 17:14:55 59m 66 88 0 88
2017-07-20_23-20 SUCCESS 1500607223 23:20:23 1500607238 23:20:38 14.9s 66 88 0 88
2017-07-21_00-00 SUCCESS 1500609645 00:00:45 1500609654 00:00:54 8.8s 66 88 0 88
2017-07-21_02-00 SUCCESS 1500616846 02:00:46 1500616856 02:00:56 9.9s 66 88 0 88
2017-07-21_03-01 SUCCESS 1500620472 03:01:12 1500620482 03:01:22 9.9s 66 88 0 88
2017-07-21_04-00 SUCCESS 1500624058 04:00:58 1500624069 04:01:09 11.6s 66 88 0 88
2017-07-21_05-01 SUCCESS 1500627663 05:01:03 1500627672 05:01:12 9.6s 66 88 0 88
2017-07-21_06-01 SUCCESS 1500631266 06:01:06 1500631275 06:01:15 9.5s 66 88 0 88
...etc

Any thoughts/ideas?


(Bruce) #4

Thanks for replying! My previous post was ignored for quite a while. I added the info you asked for by replying to my question so we don't get replies to replies to replies.


(Bruce) #5

Still need a hand on this please.


(Bruce) #6

So I have a question: Does Elasticsearch deliberately not respond to questions on the forums here so that we are forced to buy the Year support packages for a single issue? Because it definitely feels like it...


(system) #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.