Elasticsearch split brain - adding a 3rd node with node.data=false

Hello,

We have a 2-node ES cluster.
Each node is in a separate datacenter.
There has been an issue with the network that connects the 2 datacenters, and now we have a split-brain problem.

To avoid the split-brain problem in the future, we are thinking about adding a 3rd node to the cluster.
The sole purpose of the new node would be to avoid the split-brain problem, so we want to use the minimum hardware resources needed for it.
ES node 1 [already deployed in Datacenter 1]: 32 GB, 4 cores, node.master: true, node.data: true
ES node 2 [already deployed in Datacenter 2]: 32 GB, 4 cores, node.master: true, node.data: true
ES node 3 [about to be deployed in Datacenter 1]: 8 GB, 2 cores, node.master: true, node.data: false

Q1: Does the above setup with 3 ES nodes avoid "split brain" issues?

Q2: Are there any problems with the above setup regarding the cluster's high availability (HA)?
i.e., is everything fine when:
a. (ES node 1) fails or
b. (ES node 2) fails or
c. the network between the datacenters fails

Q3: What happens when both (ES node 1) and (ES node 2) are down and (ES node 3) is up? Will applications that use the ES cluster receive an error?

Thank you.
Regards,
Liviu

You can avoid split-brain scenarios with the current setup by setting discovery.zen.minimum_master_nodes to 2 according to these guidelines. This does, however, not give HA, as the cluster will reject writes if one of the nodes is missing.
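For reference, the value follows the usual quorum formula of (master-eligible nodes / 2) + 1, so with three master-eligible nodes that is 2. The setting normally goes into each node's elasticsearch.yml, but on a pre-7.x cluster it can also be applied dynamically. A minimal sketch with the Python client (the host name is a placeholder):

```python
# Sketch only: apply the quorum setting dynamically on a running pre-7.x cluster.
# The host name is a placeholder; put the same value in each node's
# elasticsearch.yml as well so it survives restarts.
from elasticsearch import Elasticsearch

es = Elasticsearch(["es-node-1.example.com:9200"])

# Three master-eligible nodes -> quorum = 3 // 2 + 1 = 2
es.cluster.put_settings(body={
    "persistent": {"discovery.zen.minimum_master_nodes": 2}
})
```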

Yes, assuming you have set discovery.zen.minimum_master_nodes to 2.

One of the master/data nodes and the dedicated master node would be up and connected, so the cluster would continue to accept reads and writes. The single node on the wrong side of the network partition would, however, not be able to accept writes.

If you have no data nodes available, you are in trouble.


Christian, thanks for your quick reply.

Yes, I forgot to mention setting discovery.zen.minimum_master_nodes to 2.

> This does, however, not give HA, as the cluster will reject writes if one of the nodes is missing.

> One of the master/data nodes and the dedicated master node would be up and connected, so the cluster would continue to accept reads and writes. The single node on the wrong side of the network partition would, however, not be able to accept writes.

It is not clear to me how HA is affected in the 3-node cluster.
Could you please describe how HA is affected in each of these cases:
A. ES node 1 fails
B. ES node 2 fails
C. ES node 3 fails

Thank you!
Regards,
Liviu

You can lose any one of the three nodes, and as you have two master-eligible nodes remaining, the cluster will continue to accept reads and writes.
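If you want to double-check which nodes are master-eligible and which one is currently elected, here is a quick sketch with the Python client (the host name is a placeholder, assuming a 5.x/6.x cluster):

```python
# Sketch only: list node roles (m = master-eligible, d = data) and the elected master.
from elasticsearch import Elasticsearch

es = Elasticsearch(["es-node-1.example.com:9200"])  # placeholder host

print(es.cat.nodes(v=True, h="name,node.role,master"))
print(es.cat.master(v=True))
```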

Note that deploying Elasticsearch across data centres is not recommended unless they are connected by a link with very low latency and high throughput.


Not to complicate the situation, but have you thought about just running it in the cloud? From the size of your nodes I suspect your capacity puts you in the realm of this being cost-effective. You may be billed for data in and out of the data centre, but you will be anyway with a link between the two, and if the cluster is in the cloud you don't have to worry about the stability of the link between data centres, only that each site can reach the web.


Christian, one more question, really sorry for repeating myself.

> The single node on the wrong side of the network partition would, however, not be able to accept writes.

Are you saying that if datacenter 1 goes down (ES node 1 & ES node 3 are down) we are unable to use our ES cluster?
We really want to keep our ES cluster going when one of the datacenters is unavailable.

Regards,
Liviu

Setting discovery.zen.minimum_master_nodes to 2 means you need 2 master-eligible nodes to be up to run the cluster. If you only have 1 node in data centre 2 and data centre 1 is offline, then you only have 1 node up, and that 1 node will not be able to form a cluster.

If you want to keep things HA without split brain, the only real options open to you are a master-only node in a 3rd data centre (or a VM hosted in the cloud), which gives you fault tolerance for 1 of the 3 data centres going dark, or Elasticsearch as a service from Elastic, who take care of all the infrastructure for you.


You can still query your data, but not update or add to it. If you want to have high availability for reads and writes even if a full data centre goes down, you need to deploy across 3 data centres.
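To make that concrete, this is roughly what the surviving node in datacenter 2 looks like from an application's point of view (a sketch; host, index and type names are placeholders, and it assumes the default discovery.zen.no_master_block of "write"):

```python
# Sketch only: behaviour of the lone surviving node while no master can be elected.
# With the default no_master_block of "write", searches against locally held shard
# copies still work, but indexing is rejected with a cluster-block error.
from elasticsearch import Elasticsearch, TransportError

es = Elasticsearch(["es-node-2.example.com:9200"])  # placeholder: the node in datacenter 2

# Reads still succeed
print(es.search(index="myindex", body={"query": {"match_all": {}}}))

# Writes fail until a master can be elected again
try:
    es.index(index="myindex", doc_type="doc", body={"field": "value"})
except TransportError as err:
    print("write rejected:", err)
```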

The other option is to set up two independent clusters and update them in parallel. If you use a message queue here, you can buffer up changes and new data if one cluster is temporarily unavailable.

Thank you all for your help!

My conclusions:

  • Adding a 3rd ES node cannot give us both split-brain protection and HA, because we only have 2 datacenters for ES node deployment (we would need 3 datacenters). If datacenter 1 (with 2 nodes) goes down, then the ES cluster becomes unavailable for writes, as I understand it.
  • In a 3-node ES cluster with discovery.zen.minimum_master_nodes=2, at least 2 nodes need to be up for the cluster to work correctly.
  • In order to preserve HA when one of the datacenters goes down, it is better for us to keep our 2-node ES cluster and reindex it when something goes bad.

I hope I am correct.
Liviu

> Adding a 3rd ES node cannot give us both split-brain protection and HA …

Correct.

> In a 3-node ES cluster with discovery.zen.minimum_master_nodes=2, at least 2 nodes need to be up …

Correct.

> … it is better for us to keep our 2-node ES cluster and reindex it when something goes bad.

No, I was suggesting having two separate clusters, one per data centre. You would then index into them in parallel and use a message queue for buffering, so you can handle temporary interruptions or slowdowns.
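Something along these lines (a rough Python sketch; host names, index name and document type are placeholders, and a real deployment would use a durable message queue rather than in-memory queues):

```python
# Sketch only: index every document into two independent clusters in parallel,
# with a buffer per cluster so one cluster being down does not stop the other.
import queue
import threading
import time

from elasticsearch import Elasticsearch, TransportError

CLUSTERS = {
    "dc1": Elasticsearch(["es-dc1.example.com:9200"]),  # placeholder hosts
    "dc2": Elasticsearch(["es-dc2.example.com:9200"]),
}
buffers = {name: queue.Queue() for name in CLUSTERS}

def publish(doc):
    # Every document is handed to both buffers, one per cluster.
    for buf in buffers.values():
        buf.put(doc)

def indexer(client, buf):
    while True:
        doc = buf.get()
        while True:
            try:
                client.index(index="myindex", doc_type="doc", body=doc)
                break
            except TransportError:
                # Cluster unreachable or blocked: keep the document and retry,
                # so changes are buffered until the cluster comes back.
                time.sleep(5)
        buf.task_done()

for name, client in CLUSTERS.items():
    threading.Thread(target=indexer, args=(client, buffers[name]), daemon=True).start()

publish({"message": "hello", "@timestamp": "2018-01-01T00:00:00Z"})
```

The key point is that each cluster consumes from its own buffer, so a slow or unreachable cluster only delays its own copy of the data.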


Christian, thanks for all the help.
My last conclusion was a personal one, not a conclusion drawn from your advice.

OK, so a possible solution would be to deploy 2 ES clusters, one in each datacenter. I believe we would have to (somehow) handle the message queue for buffering and the replication between the ES clusters ourselves; I suppose no "ready to use" mechanisms are available for these 2 needs (a message queue for ES requests, and replication between clusters).
Again, thanks.
