Does anybody know if there is a way, when a node fails and shards need to be recovered, of configuring Elasticsearch to prefer to recover from a node sharing the same awareness attributes (e.g. same rack, same zone, etc.), a bit like the "automatic preference when searching / getting" behaviour described in the user guide? We're typically seeing a lot of traffic between "zones" when a failure occurs, and I wondered why this was the case / if it was avoidable. Maybe I'm missing something and recovery always needs to replicate from the active primary?
Thanks in advance for any guidance or information,
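For context, the awareness attributes mentioned above are set per node and then referenced in the cluster allocation settings; a minimal sketch (the attribute name `zone` and the value `zone-a` are just illustrative):

```yaml
# elasticsearch.yml on each node: tag the node with its zone
node.attr.zone: zone-a

# cluster-wide setting: make shard allocation aware of the "zone"
# attribute, so primaries and replicas are spread across zones
cluster.routing.allocation.awareness.attributes: zone
```

Note that, as far as I can tell, this only controls where shard copies are *allocated*, not which node the recovery data is copied from.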
That seems like a shame given the immutability of the underlying blocks. Sure, the primary needs to identify the specific set of blocks to be replicated, but I don't see why the block data itself couldn't be pulled from a "closer" replica if it exists there?
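One way to confirm where the recovery traffic is actually flowing is the cat recovery API, which lists the source and target node of each shard recovery (host and port below are placeholders for your cluster):

```
# Show in-flight shard recoveries with their source and target nodes;
# cross-zone copies appear as source/target pairs in different zones.
curl -s "http://localhost:9200/_cat/recovery?v&h=index,shard,type,stage,source_node,target_node&active_only=true"
```

If every `source_node` is the node holding the active primary regardless of zone, that would confirm the replicate-from-primary behaviour I'm describing.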
Apache, Apache Lucene, Apache Hadoop, Hadoop, HDFS and the yellow elephant
logo are trademarks of the
Apache Software Foundation
in the United States and/or other countries.