Elasticsearch: Load balancing 2 master nodes

Hello,

I want to set up load balancing for our Elasticsearch cluster. In NEST I use a SniffingConnectionPool, which should help our client pick an active node. This seems to work.
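
For reference, our client setup looks roughly like this (a simplified sketch; the http scheme and the explicit sniffing options are just what we happen to use):

using System;
using Elasticsearch.Net;
using Nest;

// Seed both nodes; the sniffing pool discovers the rest of the cluster from them.
var pool = new SniffingConnectionPool(new[]
{
    new Uri("http://VM-SOADEV03:9200"),
    new Uri("http://VM-SOADEV04:9200")
});

var settings = new ConnectionSettings(pool)
    .SniffOnStartup()          // refresh the node list when the client starts
    .SniffOnConnectionFault(); // and again whenever a node stops responding

var client = new ElasticClient(settings);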

However, I'm having problems setting up my nodes. I have one node on each of my two servers. I want both of them to be able to work independently of the other, and to make sure they both contain all the data and keep it in sync between themselves, so that if one of the servers goes down the clients don't notice. I cannot get this to work as I want: in my current setup, when I shut down one of the servers, shards get lost and so does the data they contain.

Server a

bootstrap.memory_lock: false
cluster.name: elasticsearch
http.port: 9200
node.data: true
node.ingest: true
node.master: true
node.max_local_storage_nodes: 1
node.name: VM-SOADEV03
path.data: C:\ProgramData\Elastic\Elasticsearch\data
path.logs: C:\ProgramData\Elastic\Elasticsearch\logs
transport.tcp.port: 9300
network.bind_host: 0
network.host: 0
network.publish_host: 0
discovery.zen.ping.unicast.hosts: ["VM-SOADEV04"]

Server b

bootstrap.memory_lock: false
cluster.name: elasticsearch
http.port: 9200
node.data: true
node.ingest: true
node.master: true
node.max_local_storage_nodes: 1
node.name: VM-SOADEV04
path.data: C:\ProgramData\Elastic\Elasticsearch\data
path.logs: C:\ProgramData\Elastic\Elasticsearch\logs
transport.tcp.port: 9300
network.bind_host: 0
network.host: 0
network.publish_host: 0
discovery.zen.ping.unicast.hosts: ["VM-SOADEV03"]

On the index I have set the number_of_replicas to 1 and the number_of_shards to 3.
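
For completeness, we create the index roughly like this with NEST (simplified; the index name is just an example):

// The index name is only an example; the shard/replica settings match what we use.
var createIndexResponse = client.CreateIndex("our-index", c => c
    .Settings(s => s
        .NumberOfShards(3)
        .NumberOfReplicas(1)));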

Can anyone help me out? What settings do I need?

Thank you

Schoof

The first thing to do is to look at both of your logs.

Could you share them, please? Please use the </> icon to format them, not the citation icon.

Hi

Thanks for the quick reply!

We restarted both services to get clean logs. Then we disabled VM-SOADEV03, and the data was lost.

We cannot include our logs here because they are too big. I put them in a gist:

Thank you

Schoof

It sounds like you are running low on disk space on your machines:

[2018-01-16T09:34:06,298][WARN ][o.e.c.r.a.DiskThresholdMonitor] [VM-SOADEV04] high disk watermark [90%] exceeded on [uXIJFTi7Q9KntaZuHDtiYQ][VM-SOADEV03][C:\ProgramData\Elastic\Elasticsearch\data\nodes\0] free: 2.8gb[7.1%], shards will be relocated away from this node
[2018-01-16T09:34:06,298][WARN ][o.e.c.r.a.DiskThresholdMonitor] [VM-SOADEV04] high disk watermark [90%] exceeded on [hKdY08xETjmgB-O_iuP5eQ][VM-SOADEV04][C:\ProgramData\Elastic\Elasticsearch\data\nodes\0] free: 3.7gb[9.4%], shards will be relocated away from this node
[2018-01-16T09:34:06,298][INFO ][o.e.c.r.a.DiskThresholdMonitor] [VM-SOADEV04] rerouting shards: [high disk watermark exceeded on one or more nodes]

Also, it's not recommended to run with only 2 master-eligible nodes. You should add a 3rd one, even a small, master-only node. Then set discovery.zen.minimum_master_nodes to 2.

[2018-01-16T09:34:06,251][WARN ][o.e.d.z.ElectMasterService] [VM-SOADEV04] value for setting "discovery.zen.minimum_master_nodes" is too low. This can result in data loss! Please set it to at least a quorum of master-eligible nodes (current value: [-1], total number of master-eligible nodes used for publishing in this round: [2])

Thank you, we expanded our storage and everything seems to work now! :slight_smile:

Why is this recommended?

We should just add 1 other node with the following settings then?

node.data: true
node.ingest: true
node.master: false
discovery.zen.ping.unicast.hosts: ["VM-SOADEV04, VM-SOADEV03"]

Or am I seeing this wrong?

Why is this recommended?

The WARN message tells you that, I think:

value for setting "discovery.zen.minimum_master_nodes" is too low. This can result in data loss! Please set it to at least a quorum of master-eligible nodes (current value: [-1], total number of master-eligible nodes used for publishing in this round: [2])

We should just add 1 other node with the following settings then?

No. You should set:

node.data: false
node.ingest: false
node.master: true
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
discovery.zen.minimum_master_nodes: 2

And on the other nodes:

discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
discovery.zen.minimum_master_nodes: 2

Thank you!

But that doesn't explain why we should add another node. What does the extra, master-only node add? Do we risk data loss with 'only' 2 master and data nodes?

If we add a third, master-only node, can it be on one of the existing servers? And if that server crashes, does the other node still have all the data?

I'm just being curious here and want to understand this fully. :slight_smile:

In order to avoid split-brain scenarios and the resulting data loss, Elasticsearch requires a majority of master-eligible nodes to be available in order to elect a master node. With only 2 master-eligible nodes the majority is 2 nodes, which means that you cannot elect a master if one of the nodes is missing.

Once you have 3 master-eligible nodes in the cluster, the size of the majority is still 2, which means you can lose one node and still be able to elect a master.
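
To put the same rule in code form (purely illustrative, not anything you need to configure):

// Majority (quorum) of master-eligible nodes: floor(n / 2) + 1.
int Quorum(int masterEligibleNodes) => masterEligibleNodes / 2 + 1;
// Quorum(2) == 2 -> losing one node blocks master election
// Quorum(3) == 2 -> you can lose one node and still elect a master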

Does the third (master-only) node need to be on a separate server, to be certain that when one server is down everything is still consistent?

So if I get this right, we should set up:
Server one:

node.data: true
node.ingest: true
node.master: true
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
discovery.zen.minimum_master_nodes: 2

Server two:

node.data: true
node.ingest: true
node.master: true
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
discovery.zen.minimum_master_nodes: 2

Server three:

node.data: false
node.ingest: false
node.master: true
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
discovery.zen.minimum_master_nodes: 2

Then we should be safe if any one of the servers goes down?

Yes, I think that looks good. Your 2 data nodes will both hold a full copy of the data as long as you have 1 replica configured, and you can lose any one of the nodes while still keeping a majority of master-eligible nodes available to elect a master.
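
If you want to double-check after taking a node down, you can look at cluster health, for example from the NEST client (just a sketch):

// 'client' is the ElasticClient instance from earlier in the thread.
var health = client.ClusterHealth();

// Yellow is expected while a data node is down (replica shards can't be assigned);
// red would mean primary shards are missing.
Console.WriteLine(health.Status);
Console.WriteLine(health.NumberOfNodes);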

