When doing maintenance on a cluster [example: a minor-version ES upgrade on
all nodes], the cluster is first shut down, the maintenance is performed,
and ES on the nodes is started in quick succession. The cluster returns to
yellow in a few minutes, but it takes hours to get back to green, because
it is re-replicating all the primary shards. In this example, there is one
replica per shard on all the indexes. Is there an ES setting or patch that
would have ES re-use the replicas already on the nodes at cluster restart?
Hey Mark, thanks for your quick response. I'm a bit confused, though -- on
a cluster restart, the transient settings will be forgotten, and I would
think the nodes would come up with all shards (primary and replica) still
"initializing". Are you suggesting that by setting the disable_allocation =
true before the cluster shutdown, the replica shards on disk will be used
on the following restart?
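For reference, this is what I'm trying via the cluster update settings API (assuming a node reachable at localhost:9200, on the 0.90-era setting name). As noted above, anything under "transient" is discarded on a full cluster restart, so "persistent" would be the only way for the setting to still be in effect when the nodes come back up:

```shell
# Disable shard allocation before shutting the cluster down.
# "transient" settings are lost on a full cluster restart, so
# use "persistent" if the setting must survive the restart.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "cluster.routing.allocation.disable_allocation": true
  }
}'

# ... shut down, perform maintenance, start the nodes ...

# Re-enable allocation once all nodes have rejoined.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "cluster.routing.allocation.disable_allocation": false
  }
}'
```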
On Thursday, September 26, 2013 5:49:09 PM UTC-7, Mark Walkom wrote:
On 27 September 2013 10:43, Andrew Voelker <avoe...@gmail.com> wrote:
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to elasticsearc...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
Unfortunately, disabling allocations around a node restart did not help.
The replicas were still re-replicated after the node was restarted,
rejoined the cluster, and allocations were re-enabled.
Does anyone else have an idea on how to force a node to re-use replica
shards that were on a node prior to the node's shutdown and restart?
Regards,
Andrew
On Friday, September 27, 2013 3:03:47 AM UTC-7, Mark Walkom wrote:
We generally do a rolling restart of the cluster, so the transitory nature
of the transient settings doesn't impact us.
But if you keep at least one master active, any nodes that rejoin the
cluster will initialise their shards locally and minimise any reallocation.
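A per-node sketch of that rolling restart, for anyone following along (hostnames like es-node-1 and the service name are placeholders; the allocation toggle is the same 0.90-era setting already mentioned in this thread):

```shell
# Rolling restart: the cluster stays up throughout, so cluster
# state survives and each rejoining node can recover its shards
# from local disk instead of re-replicating them.
for node in es-node-1 es-node-2 es-node-3; do
  # Stop allocation so the cluster doesn't start rebalancing
  # the moment this node drops out.
  curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
    "transient": { "cluster.routing.allocation.disable_allocation": true }
  }'

  # Restart ES on just this one node.
  ssh "$node" 'service elasticsearch restart'

  # Let the node rejoin, re-enable allocation, and wait for
  # green before moving on to the next node.
  curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
    "transient": { "cluster.routing.allocation.disable_allocation": false }
  }'
  curl 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'
done
```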
It would be nice if it was possible to put the cluster into a service mode
so it would reject all requests, and then shut it down safely.
Then upgrade and start all the nodes so they would come back up in service
mode, then issue a command to exit service mode, which would recover all
shards, hopefully with nothing to recover since everything would be in sync.
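Until something like that exists, the closest approximation I can think of with the current APIs might be (assuming localhost:9200; writes would still have to be stopped at the application layer, since there's no request-rejecting mode):

```shell
# Rough stand-in for a "service mode" shutdown: with writes
# already stopped application-side, flush so the translogs are
# empty, disable allocation persistently, then shut down.
curl -XPOST 'http://localhost:9200/_flush'
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": { "cluster.routing.allocation.disable_allocation": true }
}'
curl -XPOST 'http://localhost:9200/_cluster/nodes/_shutdown'
```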