Can a replica be updated with the deltas only?


(Yves Dorfsman) #1

When I shut down a node that holds a replica while updates are happening to
the rest of the cluster, then restart that node, it seems that the entire
replica is copied to that node again.

Is there a way to make ES update that node with only the changes that
happened while it was down?

Thanks.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/2d6138a0-5b4d-4ab4-9ef8-2f94beaef241%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


(Binh Ly-2) #2

I don't believe this is possible. ES replicates synchronously by default,
and when a replica is down while updates/inserts are coming in, that
replica is simply invalidated and then fully recovered once it comes
back up.



(Clinton Gormley) #3

If a replica recovers from the primary and the node hosting it is shut
down shortly thereafter, then when it comes back up it will copy only the
segments that changed in the interim.

However, merges happen independently on the primary and the replica. When a
replica has been running for a long time, its segments will have diverged
from those of the primary, and so more segments need to be copied across.

In the future we hope to improve this process.
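To see how much of a shard is actually being copied during recovery, the cat recovery API reports per-shard progress. A minimal sketch (host and column selection are illustrative; run it against your own cluster):

```shell
# Show per-shard recovery progress: how many files/bytes were
# copied across versus the shard's totals. If bytes_recovered is
# close to bytes_total for a restarted replica, its segments had
# diverged and most of the shard was re-copied from the primary.
curl -s 'localhost:9200/_cat/recovery?v&h=index,shard,type,stage,files_recovered,files_total,bytes_recovered,bytes_total'
```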




(Binh Ly-2) #4

I stand corrected; Clint is right. ES will apply only diffs as much as
possible at the segment level. But if your underlying segments have
diverged significantly since the replica node went down, you will likely
end up copying a lot more than the document-level diffs. Otherwise, it
will copy only the segments that changed. :)
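One way to limit unnecessary re-copying during a planned restart (a sketch, assuming an ES version where the `cluster.routing.allocation.enable` cluster setting is available) is to disable shard allocation before stopping the node and re-enable it afterwards, so the cluster does not start rebuilding the replica elsewhere while the node is briefly down:

```shell
# Disable shard allocation before stopping the node, so its
# replicas are not reassigned to other nodes while it is down.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# ...restart the node, then re-enable allocation so the
# returning replica can recover in place on the same node.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```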

