CAP theorem

Interesting. First of all, in most cases you have both the ES clients ("native
clients") and the nodes on the same network switch, so the chances of a split
brain involving clients are identical to Terracotta's.

So, now I understand: TC does sync replication between the active and passive
servers, and writes to disk (asynchronously, I presume).

Yet another question in the TC case, then. Let's assume you have two servers,
active and passive; both, I assume, write their state to local disk.
Now you bring down the active server, the passive becomes active (master),
and clients start writing to the new active server.

Now I bring down that last server, which is active, and afterwards start
the first server, which, I assume, becomes active and starts receiving
client requests. That server now has a stale view of the data, since clients
were making changes on the other server while it was down. How does
TC recover from that?
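The sequence above can be sketched as a toy simulation (the `Server` class and helpers here are purely illustrative, not Terracotta's API): each server persists state to its own local disk, replication mirrors the active's state to the passive, and restarting the stale server first brings back old data.

```python
# Hypothetical sketch of the failover sequence: two servers persist state
# to local disk; restarting the wrong one first resurrects stale data.

class Server:
    def __init__(self, name):
        self.name = name
        self.disk = {}        # local persistent state
        self.active = False

    def write(self, key, value):
        # Clients may only write to the currently active server.
        assert self.active, f"{self.name} is not active"
        self.disk[key] = value

def replicate(active, passive):
    # Sync replication: the passive mirrors the active's state.
    passive.disk = dict(active.disk)

s1, s2 = Server("s1"), Server("s2")
s1.active = True

s1.write("x", 1)
replicate(s1, s2)        # both disks now hold {"x": 1}

s1.active = False        # active goes down; passive takes over
s2.active = True
s2.write("x", 2)         # client writes now reach only the new active

s2.active = False        # the last active goes down too
s1.active = True         # the old server restarts and becomes active

print(s1.disk["x"])      # prints 1 -- stale: the write of 2 never reached s1
```

This is exactly the question: nothing in the sketch reconciles `s1`'s disk with the writes `s2` accepted while `s1` was down.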

-shay.banon

On Sun, Jun 20, 2010 at 4:37 PM, Sergio Bossa <sergio.bossa@gmail.com> wrote:

On Sat, Jun 19, 2010 at 11:55 PM, Shay Banon
shay.banon@elasticsearch.com wrote:

Well, next time I am in Rome (well, I've never been, so it will be my first
time there :wink:)...

Anytime :wink:

Happy the answers make sense. Btw, you did not answer
the Terracotta questions; that is something I always wanted to
know about but could not find anything on in the docs...

Sorry, missed your questions :slight_smile:
Anyways, Terracotta should work as follows:

  1. In case of client or server failure, using async writes, there's no
    data loss provided you run an active/passive pair: the passive one
    will take the transaction over and complete it as the new active.
  2. In case of an active/passive server partition, the currently active
     one will keep its clients connected, while the passive one will
     elect itself as master, but with no attached clients, so there will
     be no split brain; once the partition heals, the server which kept
     the attached clients will zap the other one and downshift it to
     passive state.
     In the end, you could have a split brain only if you had one server
     and a bunch of clients on one switch, and another server and another
     bunch of clients on another switch, and the switches got partitioned
     ... a pretty bizarre network configuration, provided you're not
     running in the cloud ... so, Terracotta also has its own split brain
     vulnerabilities, which are IMHO less common than the ES ones; but
     Terracotta is master-based, so coordination is easy to manage, while
     ES is decentralized and yadda-yadda-yadda ... you get the idea :slight_smile:
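The tie-break in point 2 can be sketched roughly like this (a simplified illustration with made-up names, not Terracotta's actual implementation): during the partition both sides may claim "active", but only one kept its clients, and on heal that one wins and forces the other back to passive.

```python
# Illustrative sketch of the post-partition tie-break: the server that
# kept connected clients "zaps" the other and downshifts it to passive.

def resolve_partition(server_a, server_b):
    """Pick the server with attached clients as the surviving active;
    the other is zapped back to passive and must resync."""
    winner = server_a if server_a["clients"] > 0 else server_b
    loser = server_b if winner is server_a else server_a
    winner["role"] = "active"
    loser["role"] = "passive"    # zapped: drops its claim and resyncs
    return winner, loser

# During the partition, both claim to be active, but only one has clients.
a = {"name": "old-active", "role": "active", "clients": 5}
b = {"name": "old-passive", "role": "active", "clients": 0}

winner, loser = resolve_partition(a, b)
print(winner["name"], loser["role"])   # prints: old-active passive
```

The key point is that having zero attached clients makes the passive's self-election harmless: it never accepts writes, so there is no divergent state to reconcile.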

Hope that answers your questions ... feel free to ask more obviously :wink:
Cheers!

Sergio B.

--
Sergio Bossa
http://www.linkedin.com/in/sergiob