Listing stored snapshots - RepositoryMissingException - New, empty cluster

I'm testing on a lab cluster:

5 nodes.
Repository name = "Snapshots"

Immediately after creating the snapshots, listing them returned each
snapshot and the indices it contained.
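
For reference, the snapshots were created with something like the following
(the snapshot name "snapshot_1" is just a placeholder for whatever naming
was actually used):

curl -XPUT "localhost:9200/_snapshot/Snapshots/snapshot_1?wait_for_completion=true"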

Now I'm attempting to restore the snapshots to an empty cluster.
The new cluster looks good from the following:

curl -XGET localhost:9200/_cluster/health

{"cluster_name":"elasticsearch","status":"green","timed_out":false,"number_of_nodes":5,"number_of_data_nodes":5,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}

From the master node on this new cluster, I ran the following to list all
snapshots eligible to be restored. The node named in the error below is the
local node. I have verified that the node has full access to the shared file
system and to the snapshot repository.

curl -XGET localhost:9200/_snapshot/Snapshots/_all

{"error":"RemoteTransportException[[ELASTICSEARCH-1][inet[/192.168.248.147:9300]][cluster/snapshot/get]];
nested: RepositoryMissingException[[Snapshots] missing]; ","status":404}

I'm trying to think of reasons why ELASTICSEARCH-1 can't find the snapshot
repository. Maybe no other node can "find" the repository either, and the
command simply stops at this first node's failure.
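
One quick sanity check would be to ask the new cluster which repositories it
actually knows about; if nothing comes back, no repository is registered.
Depending on the version, either the bare _snapshot endpoint or the cluster
state (under metadata -> repositories) should show this:

curl -XGET localhost:9200/_snapshot
curl -XGET localhost:9200/_cluster/state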

Does the repository need to be registered on this new cluster using the same
steps used to initialize an empty repository before creating snapshots? If
so, I assume the existing repository data won't be wiped out (or maybe the
data should be moved out, the repository registered, and the data then moved
back into the newly registered repository?)

Maybe Snapshot/Restore has not actually been trialed in a migration
scenario?

Note this lab exercise is similar to another forum post asking about
cluster migration, whose author also proposed using Snapshot/Restore.

Thx,
Tony

When you originally defined the snapshot repository, you did something like
this (or similar):

PUT http://localhost:9200/_snapshot/Snapshots
{
  "type": "fs",
  "settings": {
    "location": "/blah"
  }
}

On the new/empty cluster, this repository is not yet registered, so the
first thing you need to do is run that again:

PUT http://localhost:9200/_snapshot/Snapshots
{
  "type": "fs",
  "settings": {
    "location": "/blah"
  }
}

Then after that, you should be able to restore to the new/empty cluster (as
long as that location "/blah" is accessible from the new cluster's nodes).
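
Once the repository is registered, the restore itself is a single call;
"snapshot_1" below is just a placeholder for whatever the snapshot is
actually named (you can list the real names with the same
GET /_snapshot/Snapshots/_all call used above):

curl -XPOST "localhost:9200/_snapshot/Snapshots/snapshot_1/_restore"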

Cool.

Still not knowing whether the act of registering a snapshot repository
pointing to a populated directory might "initialize/wipe" the location, I
made a copy before registering again.

After registering the repository, I queried for existing snapshots and all
looks well, so it looks like registering is non-destructive.

Tony

Wow!

I'm amazed at how performant a restore is compared to restarting a populated
cluster!

Existing cluster:
5 nodes, all master- and data-eligible
Configured with 5 shards, 0 replicas per index
Apache log data
330 indices
4,361,031 documents

The current configuration, which prevents the cluster from considering
re-allocation until all nodes have joined (set to the same number as the
total nodes in the cluster), is the following:

gateway.recover_after_nodes: 5
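
For completeness, the related gateway settings in elasticsearch.yml look
something like this; expected_nodes and recover_after_time are companions to
recover_after_nodes, and the values here are just illustrative:

# elasticsearch.yml
gateway.recover_after_nodes: 5   # hold recovery until at least 5 nodes have joined
gateway.expected_nodes: 5        # begin immediately once all 5 expected nodes are present
gateway.recover_after_time: 5m   # otherwise wait up to 5 minutes after the threshold is met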

When restarting this cluster, I usually have trouble getting all nodes to
stay online during the restart. This typically causes shard re-allocation,
possibly followed by still more re-allocation.
Typically it takes at least 10 minutes for the cluster to recognize all 5
nodes, followed by at least another 40 minutes (often more) to go green.

When restoring this cluster from snapshot:
Since there is no data in the new cluster, all 5 nodes are recognized almost
immediately (no 10-minute wait).
The restore (not completely successful, see below) takes only about 20
minutes.
Not even a hint that any node was unhappy and might drop from the cluster.
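
While the restore runs, per-shard progress can be watched; one way (assuming
a version with the cat API) is:

curl -XGET "localhost:9200/_cat/recovery?v"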

One thing I noticed: shard initialization runs 20 shards at a time, twice as
many as when restarting a populated cluster.
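
If I understand the allocation settings correctly, that concurrency is
governed by per-node recovery limits, which can be adjusted on the fly; the
value below is only an example:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient": {
    "cluster.routing.allocation.node_concurrent_recoveries": 4
  }
}'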

But it looks like two shards remain unallocated:

  • A replica of a Marvel index (the primary appears to have successfully
    initialized)
  • A primary of one of the Apache log indices.

Last time I experienced this, I simply deleted the problem index and
re-input the data. (BTW, input is done by month, so restoring that one day
meant re-inputting all 30 days. For the other 29 days the document versions
were simply incremented, which better answers a post I made to this forum
weeks ago asking about exactly this scenario.)

In this scenario, I don't think I can re-load historical Marvel data (does
it even exist in a form that can be re-loaded?).
Also, I'm wondering if there is a better solution for "fixing" a single
index that refuses to initialize.
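
One option that might apply (I haven't tried it here) is the cluster reroute
API, which can force allocation of a specific shard. The index, shard, and
node names below are placeholders, and allow_primary can discard data if the
shard's files are truly gone, so it should be used with care:

curl -XPOST localhost:9200/_cluster/reroute -d '{
  "commands": [
    {
      "allocate": {
        "index": "apache-2014.01",
        "shard": 0,
        "node": "ELASTICSEARCH-1",
        "allow_primary": true
      }
    }
  ]
}'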

TIA,
Tony

Update:

After shutting down the cluster and restarting it, the problem resolved
itself. Both shards that refused to initialize earlier were able to
initialize on restart.

Maybe the lesson to be learned is that if integrity was verified when the
snapshots were made (I hope so!), then a restore should almost certainly
succeed.

Tony
