Explicitly Copying Replica Shards That Fail to Start

Greetings,

I am still having a problem recovering 5 replica shards in 2 of my indices
on a 3-node cluster. The replica shards fail to initialize and keep
bouncing between the two non-primary data nodes. The primary shards are fine.

What is my path to recovery? Would copying the primary shard data to the
other nodes be a correct approach? I tried issuing routing commands to
cancel recovery/allocation; that helped with some replica shards, but not
with the 5 in question.
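
[Editor's note: for context, a cancel command for a stuck replica is sent to the cluster reroute API. This is only a sketch; the index name, shard number, and node name below are placeholders, not values from this thread.]

```
POST /_cluster/reroute
{
  "commands" : [
    { "cancel" : { "index" : "my_index", "shard" : 2, "node" : "node2", "allow_primary" : false } }
  ]
}
```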

I also tried dumping the indices with the failing replica shards, but two
nodes crashed (well, lost their connection to the cluster), so the dump failed.

Would setting the replica count to 0, copying the primaries to the 2 other
nodes, and then setting the replica count back to 1 be a viable alternative?

Thank you,

David

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/8e7c4f11-2790-49d6-8c65-87e9aa05aa3b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Yep, the easiest way is to drop the replica and then add it back and see
how you go.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com



Thank you Mark!

Setting

{
  "index" : {
    "number_of_replicas" : 0
  }
}

and then back to 1 cleared the bad replicas and rebuilt them from primaries.
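
[Editor's note: assuming the stock REST API, the two settings updates described above look roughly like this; the index name is a placeholder. The second call causes Elasticsearch to build fresh replica copies from the primaries.]

```
PUT /my_index/_settings
{ "index" : { "number_of_replicas" : 0 } }

PUT /my_index/_settings
{ "index" : { "number_of_replicas" : 1 } }
```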

Much appreciated,

David


I used to apply that "trick" all the time with older versions of
Elasticsearch! Thankfully I haven't needed it in years.

--
Ivan

