So, I have a three-node cluster. Each of the nodes is master-eligible, a data node, and an ingest node. I did that because, well, I only have three nodes.
But I am thinking, do I really need my indices replicated in three places? Two copies should be fine, so now I want to turn my first node into a dedicated master, but the other two would still be master-eligible in case that first node goes down.
Questions
If the first node went down and another node got elected master, how do I force my dedicated master to be given his old job back when he comes back online? I only want the new master to act as master temporarily while the dedicated master heals.
When I do convert my fully data- and ingest-eligible node to a dedicated master, it would be nice to get rid of all that data he's holding (the shards), since he's not going to be doing any searching or indexing anymore. Do I need to do anything, or will Elasticsearch know to dump those shards?
I assume he's still a coordinating node so that I can still send requests to him and have him forward appropriate requests to one of the other nodes in my cluster?
I did search for these answers, but I didn't really see anything recent, and I think some of those older answers might not be applicable anymore.
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "10.29.248.230"
  }
}
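That tells the cluster to move all shards off that node (the IP above is just an example; use your node's address). You can watch the shards drain with GET _cat/allocation?v, and once the node is empty and restarted with its new roles, clear the exclusion by setting it back to null. A rough sketch:

GET _cat/allocation?v

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": null
  }
}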
Yes.
But all in all, I would suggest keeping all the nodes as combined master/data nodes in a small cluster. The master role doesn't need a lot of resources; data nodes do. And by spreading your shards across three nodes you will gain some speed as well.
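And if two copies of each shard are enough for you, you can just lower the replica count rather than shrinking the cluster; with number_of_replicas set to 1 you get two copies of each shard (the primary plus one replica). A sketch that applies it to every existing index (adjust the target if you only want some of them):

PUT /*/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}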
You can't, and indeed you do not want to do that. What if the "preferred" master was suffering an intermittent fault? If it were re-elected every time it came back and then failed again then your cluster would be so busy with elections that it wouldn't be able to do any useful work. It's much more robust to stick with the current master as long as possible.
Great, David, thanks. Do you also pretty much agree with the comment above saying it's probably not necessary to have a dedicated master on a three-node cluster?
It depends. 2 master/data nodes plus one dedicated master node is a perfectly reasonable setup; if you want fault tolerance then you cannot have fewer than 3 master-eligible nodes or fewer than 2 data nodes, so you can't make it any smaller. Whether it's worth it is up to you - it may be cheaper to operate than 3 full master/data nodes, but may not be worth the extra orchestration complexity vs having 3 identical nodes.
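For what it's worth, if you do end up with the 2 data + 1 dedicated master layout, the role assignment lives in each node's elasticsearch.yml. A sketch assuming Elasticsearch 7.9 or later, where node.roles replaces the older node.master/node.data/node.ingest booleans (node names are made up):

# elasticsearch.yml on the dedicated master
node.name: node-1
node.roles: [ master ]

# elasticsearch.yml on each of the two data nodes (still master-eligible)
node.name: node-2
node.roles: [ master, data, ingest ]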