What if dedicated master node goes down and another node takes over as master?

So, I have a three-node cluster. Each of the nodes is master-eligible, a data node, and an ingest node. I did that because, well, I only have three nodes.

But I am thinking, do I really need my indices replicated in three places? Two should be fine, so now I want to turn my first node into a dedicated master, but the other two would still be master-eligible in case that first node goes down.
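
(Side note on the replica count: going from three copies down to two is just an index-settings change, separate from the node roles. A minimal sketch, assuming the indices currently have number_of_replicas set to 2 and using a placeholder index name:)

    PUT /my-index/_settings
    {
      "index": {
        "number_of_replicas": 1
      }
    }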

Questions

  1. If the first node went down and another node got elected master, how do I force my dedicated master to be given his old job back when he comes back online? I only want the new master to be temporarily acting master while the dedicated master heals.
  2. When I do convert my fully data and ingest eligible node to dedicated master, it would be nice to get rid of all that data he's holding (the shards) since he's not going to be doing any searching or indexing anymore. Do I need to do anything or will Elastic know to dump those shards?
  3. I assume he's still a coordinating node so that I can still send requests to him and have him forward appropriate requests to one of the other nodes in my cluster?

I did search for these answers, but I didn't really see anything recent, and I think some of those older answers might not be applicable anymore.

Thanks!

I have gone through this process in the last few months. You want a straight answer, but it's good that you've been doing the reading; it will help.
This is your guide to setting the nodes up:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html

  1. See the "dedicated master-eligible node" section in that guide.

  2. To get rid of all the data, empty the node out first (once it has drained, see the follow-up sketch after this list):

         PUT /_cluster/settings
         {
           "transient": {
             "cluster.routing.allocation.exclude._ip": "10.29.248.230"
           }
         }
    
  3. yes
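
Once the node reports no shards left on it, you would restart it with only the master role and then clear that transient exclusion so it doesn't linger in the cluster state. A minimal sketch, reusing the example IP from above (resetting a cluster setting to null removes it):

    PUT /_cluster/settings
    {
      "transient": {
        "cluster.routing.allocation.exclude._ip": null
      }
    }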

But all in all, I would suggest keeping all nodes as master/data nodes in a small cluster.
The master doesn't need a lot of resources; data nodes do. And by spreading your shards across three nodes you will gain some speed as well.

Thanks! Yeah, that was how I originally ended up with this configuration, because I thought the same thing, with such a small cluster.

You can't, and indeed you do not want to do that. What if the "preferred" master was suffering an intermittent fault? If it were re-elected every time it came back and then failed again then your cluster would be so busy with elections that it wouldn't be able to do any useful work. It's much more robust to stick with the current master as long as possible.

The process you should use is documented in the manual here.

Yes.

Great, David, thanks. Do you also pretty much agree with the comment above saying it's probably not necessary to have a dedicated master on a three node cluster?

It depends. 2 master/data nodes plus one dedicated master node is a perfectly reasonable setup; if you want fault tolerance then you cannot have fewer than 3 master-eligible nodes or fewer than 2 data nodes, so you can't make it any smaller. Whether it's worth it is up to you - it may be cheaper to operate than 3 full master/data nodes, but may not be worth the extra orchestration complexity vs having 3 identical nodes.
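
For the record, a minimal sketch of what that 2 + 1 layout can look like in each node's elasticsearch.yml, assuming a version recent enough to support node.roles (older releases use the separate node.master / node.data / node.ingest booleans instead):

    # node-1: dedicated master-eligible node, holds no shards
    node.roles: [ master ]

    # node-2 and node-3: master-eligible data and ingest nodes
    node.roles: [ master, data, ingest ]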

Reasonable answer, thanks again David.
