First, note that I initially tried to restore from the NFS mount. When that didn't work, I copied the backups to a local folder and tried to restore from there. The old path.repo entries are commented out below.
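For completeness, the copy to /esrestore was essentially a recursive copy off the mount plus an ownership fix so the elasticsearch user can read the files (commands reconstructed from memory, paths as above):

sudo cp -a /media/nfs/es/. /esrestore/
sudo chown -R elasticsearch:elasticsearch /esrestore/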
elasticsearch.yml
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: Test-Cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ------------------------------------ HTTP ------------------------------------
http.host: "10.3.0.39"
# http.host: "<hostIP>"
http.cors.enabled: true
http.cors.allow-origin: "*"
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
#path.repo: ["/media/nfs/es/","/media/nfs2/es/"]
path.repo: ["/esrestore/"]
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.3.0.39
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-discovery-zen.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Here are the calls that initialize the repo and then create the snapshots. They're from a bash script, so you'll see variables in there, but they do work: the repo and snapshots exist on the original server. Let me know if you'd like these expanded:
sudo curl -g -H "Content-Type: application/json" -X PUT \
  -d '{ "type": "fs", "settings": { "location": "'$SAVEPATH'", "max_snapshot_bytes_per_sec": "50mb", "max_restore_bytes_per_sec": "50mb" } }' \
  "http://$HOSTIP:9200/_snapshot/$newRepositoryName"
sudo curl -g -H "Content-Type: application/json" -X PUT \
  -d '{ "type": "fs", "settings": { "location": "'$SAVEPATH/$SNAPSHOTNAME'" } }' \
  "http://$HOSTIP:9200/_snapshot/$newRepositoryName/$SNAPSHOTNAME?wait_for_completion=true&pretty"
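On the new host, the registration was essentially the same call with the literal values filled in, pointing at the local copy:

curl -H "Content-Type: application/json" -X PUT \
  -d '{ "type": "fs", "settings": { "location": "/esrestore/" } }' \
  'http://10.3.0.39:9200/_snapshot/ESCB1Cluster'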
Call to see repos:
user@backuptestnew:~$ curl -XGET '10.3.0.39:9200/_cat/repositories?v'
id type
ESCB1Cluster fs
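The registration can also be sanity-checked with the verify endpoint (as I understand it, this should return an error if the node can't read the location):

curl -XPOST 'http://10.3.0.39:9200/_snapshot/ESCB1Cluster/_verify?pretty'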
Call to see snapshots:
user@backuptestnew:~$ curl -XGET '10.3.0.39:9200/_snapshot/ESCB1Cluster/_all?pretty=true'
{
"snapshots" : [ ]
}
or, using cat: curl -XGET '10.3.0.39:9200/_cat/snapshots/ESCB1Cluster', which returns nothing at all.
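One thing I'm unsure about: as I understand the fs repository layout, the directory given as location should itself contain the repository metadata (an index-N / index.latest file, snap-*.dat and meta-*.dat files, and an indices/ directory). If the NFS copy landed one level deeper, the repo would register fine but list no snapshots. On the new host I'd check with:

ls -la /esrestore/
# expecting something like: index.latest  index-42  indices/  meta-*.dat  snap-*.dat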
Here is the same call on the original host; I'm only including the cat one because the _snapshot one is too verbose:
user@original:/usr/local/bin$ curl -XGET '10.3.0.28:9200/_cat/snapshots/ESCB1Cluster'
2017-07-20_16-15 SUCCESS 1500581754 16:15:54 1500585295 17:14:55 59m 66 88 0 88
2017-07-20_23-20 SUCCESS 1500607223 23:20:23 1500607238 23:20:38 14.9s 66 88 0 88
2017-07-21_00-00 SUCCESS 1500609645 00:00:45 1500609654 00:00:54 8.8s 66 88 0 88
2017-07-21_02-00 SUCCESS 1500616846 02:00:46 1500616856 02:00:56 9.9s 66 88 0 88
2017-07-21_03-01 SUCCESS 1500620472 03:01:12 1500620482 03:01:22 9.9s 66 88 0 88
2017-07-21_04-00 SUCCESS 1500624058 04:00:58 1500624069 04:01:09 11.6s 66 88 0 88
2017-07-21_05-01 SUCCESS 1500627663 05:01:03 1500627672 05:01:12 9.6s 66 88 0 88
2017-07-21_06-01 SUCCESS 1500631266 06:01:06 1500631275 06:01:15 9.5s 66 88 0 88
...etc
Any thoughts/ideas?