Snapshot & Restore in a cluster of two nodes

Hello,

we have a cluster of two nodes. Every index in this cluster consists of two
shards and one replica. We want to use snapshot & restore to transfer data
between two clusters. When we take our snapshot on node one, only the
primary shard is included; the replica shard is missing. While restoring on
the other cluster, the process breaks because of the missing second shard.
Do we have to take a snapshot on each node to include both primary shards
so that we can restore the whole index, or am I missing something here?
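
For reference, this is roughly how we register the repository and trigger
the snapshot (the repository name and path here are placeholders, not our
real values):

  # register a shared filesystem repository
  curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
      "location": "/mnt/backups/my_backup"
    }
  }'

  # take the snapshot and wait for it to finish
  curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'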

Thanks in advance
Daniel

Hey,

can you be more precise and create a fully fledged example (generating the
repository, executing the snapshot on cluster one, executing the restore on
cluster two, etc.), and include the concrete error message, so we can find
out what 'the process breaks' means here? Also provide info about your
Elasticsearch and JVM versions. Thanks!

Snapshots are always done per index (the primary shards) and not per node,
so there must be something else going on.
Is it possible that only one node has write access to the repository?
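
As a sanity check, the snapshot status API (if I remember correctly,
available from Elasticsearch 1.1 on) shows the per-shard state of a
snapshot, so you can verify that all primary shards were actually included
(repository and snapshot names below are placeholders):

  # list the per-shard status of a finished snapshot
  curl -XGET 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_status?pretty'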

--Alex

Hi Alex,

thanks for your answer. Here are some more detailed facts:

  1. I create a repository on node1 in cluster1
  2. Execute the snapshot on node1 in cluster1
  3. Execute the restore on node1 in cluster2 (roughly the API calls
     sketched below)
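
In terms of concrete calls, that was roughly the following. The host names
and the repository path are placeholders; the repository and snapshot names
are our guesses reconstructed from the log below:

  # on node1 in cluster1: register the repository and take the snapshot
  curl -XPUT 'http://node1.cluster1:9200/_snapshot/bel-en' -d '{
    "type": "fs",
    "settings": {
      "location": "/path/to/backup"
    }
  }'
  curl -XPUT 'http://node1.cluster1:9200/_snapshot/bel-en/2014-06-24_23_48_52?wait_for_completion=true'

  # on node1 in cluster2: restore from the same repository
  # (the bel-en index does not yet exist on cluster2)
  curl -XPOST 'http://node1.cluster2:9200/_snapshot/bel-en/2014-06-24_23_48_52/_restore'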

The error message is as follows (the server URL was redacted); the index
name is "bel-en":

[2014-06-24 23:48:56,197][WARN ][cluster.action.shard ] [coordinator]
[bel-en][0] received shard failed for [bel-en][0],
node[e6sRC7OzRnq1XswwYjZ1JQ], [P], restoring[bel-en:2014-06-24_23_48_52],
s[INITIALIZING], indexUUID [a6dQi6kDTI-xvlyM6NRq8Q], reason [Failed to
start shard, message [IndexShardGatewayRecoveryException[[bel-en][0] failed
recovery]; nested: IndexShardRestoreFailedException[[bel-en][0] restore
failed]; nested: IndexShardRestoreFailedException[[bel-en][0] failed to
restore snapshot [2014-06-24_23_48_52]]; nested:
IndexShardRestoreFailedException[[bel-en][0] failed to read shard snapshot
file]; nested:
FileNotFoundException[http://{server-url}/bel-en/indices/bel-en/0/snapshot-2014-06-24_23_48_52];
]]
[2014-06-24 23:48:56,519][WARN ][cluster.metadata ] [coordinator]
[bel-en] re-syncing mappings with cluster state for types [[product]]

The directory http://{server-url}/bel-en/indices/bel-en/0 is empty.

The JVM version is the same on all nodes.

Elasticsearch is running version 1.1.1.

Write access should be there; we registered the fs repository on both nodes
of the cluster, pointing to a directory on one of them.

Thanks for your help.

Daniel

Hey,

didn't you create the repository on cluster2 as well? Otherwise the second
cluster does not have a location to look up the files, which might explain
the FileNotFoundException. Though I guess you did, since you were able to
trigger the snapshot; in that case, can you check the permissions?
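
For completeness, registering the repository on cluster2 and checking that
it can actually see the snapshots would look roughly like this (host names,
repository name and path are placeholders; the settings must match the
repository registered on cluster1):

  # on cluster2: register a repository pointing at the same location
  curl -XPUT 'http://node1.cluster2:9200/_snapshot/bel-en' -d '{
    "type": "fs",
    "settings": {
      "location": "/path/to/backup"
    }
  }'

  # verify that cluster2 can list the snapshots before restoring
  curl -XGET 'http://node1.cluster2:9200/_snapshot/bel-en/_all?pretty'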

Again, it is really hard to help if you do not provide a fully fledged
example. Please see the Elasticsearch help page for what is required. It
makes it much easier for others to follow your steps and give advice.

--Alex
