Hi, all. We want to add 2 more (master-eligible) nodes to our single-server cluster. Some experimentation on multiple dummy servers showed that the only thing I need to do for that is to add discovery.zen.ping.unicast.hosts listing all three nodes. The problem is that we want to do it without any downtime, i.e. without having to restart the current master. Further experimentation showed that I can set the mentioned setting on a new server only, and it will happily join a cluster whose master does NOT have that setting. What does the fact that the current master doesn't have it set really mean in practical terms? Is it enough to put it in its elasticsearch.yml for future restarts and let it run without it for some time? Thanks.
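For concreteness, this is roughly the line I'm adding to each node's elasticsearch.yml (the hostnames here are just placeholders for our three machines):
discovery.zen.ping.unicast.hosts: ["es1.example.local", "es2.example.local", "es3.example.local"]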
That's sufficient in version 7 (and later) but not in earlier versions. I think you're using an earlier version, because discovery.zen.ping.unicast.hosts is deprecated in v7 in favour of discovery.seed_hosts.
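For reference, once you do move to 7.x the equivalent would look something like this in elasticsearch.yml (hostnames are just examples):
discovery.seed_hosts: ["es1.example.local", "es2.example.local", "es3.example.local"]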
In versions 6 and earlier you are right that you should set discovery.zen.ping.unicast.hosts on all three nodes, but the vitally important setting is discovery.zen.minimum_master_nodes. If you set that wrongly then your cluster can split into pieces and lose data as a result.
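Concretely, on a 3-node cluster each node's elasticsearch.yml should end up with something along these lines (hostnames are just examples):
discovery.zen.ping.unicast.hosts: ["es1.example.local", "es2.example.local", "es3.example.local"]
discovery.zen.minimum_master_nodes: 2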
You should set discovery.zen.minimum_master_nodes: 2 in the two new nodes' config files. Then you should start one of the nodes and let it join the existing master. As soon as possible after that you should also set discovery.zen.minimum_master_nodes to 2 via the cluster settings API as well:
PUT _cluster/settings
{"persistent":{"discovery.zen.minimum_master_nodes": 2}}
Then, and only then, you should start the other new node and let it join the cluster. Once it's there, you should set discovery.zen.minimum_master_nodes: 2 in the original master's config file and restart it. Once you've restarted this node you can safely remove the cluster setting:
PUT _cluster/settings
{"persistent":{"discovery.zen.minimum_master_nodes": null}}
Thanks a lot for the thorough reply. Yes, we're using 6.5.4, as it's the only version available via FreeBSD ports.
But shouldn't it be set to N/2+1 at all times?
Yes, but by this point in the process it is set to 2 on each node, so there's no need for it also to be set in the cluster settings.
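To spell out the arithmetic: with N=3 master-eligible nodes, N/2+1 works out to 2 (integer division: 3/2 = 1, plus 1), so 2 is the right value both during and after this change; with N=5 it would be 3.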
One last question if I may: can we set discovery.zen.ping.unicast.hosts dynamically on the current master without having to restart it, and simply repeat the setting in its elasticsearch.yml for it to persist? It's just that our current Ruby on Rails client code isn't ready to cycle through all possible ES servers and only uses the single master node. We're planning on extending that a bit later.
No, that's not possible. This setting can only be set in elasticsearch.yml.
Since setting unicast.hosts on the current master and restarting it should be done quickly after bringing in the other 2 nodes, can I do it backwards? That is: take the 2 new nodes down; set unicast.hosts on the current master first, restart it and see if it still works; then, maybe a day later, bring the first new server up with unicast.hosts properly configured and minimum_master_nodes set to 2, also setting minimum_master_nodes to 2 via the API (and repeating it in the master's elasticsearch.yml) so the master is aware of it; and finally, after the status turns green, run the 3rd node with unicast.hosts and minimum_master_nodes set the same as on the 2nd node?
I think you are perhaps confusing discovery.zen.ping.unicast.hosts and discovery.zen.minimum_master_nodes? It's very important to set minimum_master_nodes correctly as soon as possible, but the unicast.hosts setting is quite tolerant of being set wrongly. In particular, it tolerates having extra entries that don't (yet) correspond to running nodes.
Sorry, then I probably misunderstood you when you wrote that I absolutely had to restart the current master. The only reason to restart it would be to apply the unicast.hosts change on it, because minimum_master_nodes can also be set dynamically. But now I see that unicast.hosts will probably be needed if a network outage separates the current master from the other two: they would elect a new master between them (since their number satisfies minimum_master_nodes=2), and the current master would remain the master, not knowing of the other parties because of the unchanged unicast.hosts - split brain.
Yes, this still indicates some confusion over unicast.hosts vs minimum_master_nodes. The reason a master might remain a master on its own is that it thinks minimum_master_nodes is 1; it has nothing at all to do with unicast.hosts. You could put 3, or 10, or 100 entries in unicast.hosts, and it would make no difference to this situation.
Can I set minimum_master_nodes=2 on the master before doing anything else? It wouldn't prevent it from functioning in a single-server scenario, would it?
No - that's the problem. If you were to set minimum_master_nodes: 2 on a solitary master then it wouldn't be able to be master any more, because with that setting you require at least two master-eligible nodes.
Wow, does that mean that if in the future 2 of the 3 master-eligible servers die off or become unreachable, the single remaining server will be unable to function if it happens NOT to be the master? What's the point in having fault tolerance then? )
Not quite. If 2 of the 3 master-eligible nodes fail then the remaining node will not function as a master, whether it was the master beforehand or not. It'd have no way of knowing whether the 2 other nodes are really gone or whether it's just suffering from a network partition, and therefore it cannot tell whether the other 2 nodes have formed themselves into a cluster on the other side of the partition. The only way to avoid a split-brain situation (i.e. data loss) is to stand down.
If you have 3 master-eligible nodes then you can tolerate the loss of at most one of them. This isn't a limitation imposed by Elasticsearch - it's theoretically optimal. If you want to be able to tolerate the loss of 2 masters then you need at least 5 master-eligible nodes.
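Put another way: to tolerate the loss of f master-eligible nodes you need 2f+1 of them, so that a majority of f+1 remains - 3 nodes tolerate 1 failure, 5 tolerate 2, 7 tolerate 3.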
I let them join the cluster and the cluster status turned green. I'm a bit puzzled by the fact that disk usage (du -sh) of /var/db/elasticsearch is 17G on the master and only 5.8-5.9GB on the other two nodes.
$ curl -X GET http://titan.local:9200/_cluster/health
{"cluster_name":"foo","status":"green","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":5,"active_shards":10,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}