Cluster master self-destructs soon

We've been running an ES cluster on AWS using the cloud-aws plugin and EC2 discovery.
Trouble started when we received an email notifying us that one of the servers in the cluster is being retired (meaning AWS has warned us that the instance will be decommissioned sometime soon).
To prepare for the event I created an image of the instance, launched a new instance from that image, and reconfigured it to join the cluster.
Now that I had 4 nodes in my cluster, I was ready to remove the faulty instance.
Alas, as I was getting ready to shut it down I noticed it is the master of my cluster.
Now, I'm still pretty new to ES, so I don't know exactly what the consequences of dropping the master from the cluster are.
My questions are: is there a way to nominate another instance as the master?
What happens if the master just up and dies with no prior notice?
What is the proper way of de-registering instances from the cluster? Do I just shut them down?

Thanks!
Yaron.

Don't run with a single master-eligible node in the cluster. If you had set up all nodes in your initial cluster to be master-eligible, with minimum_master_nodes set to 2 (a majority of the 3 nodes), you would have been able to lose the decommissioned node, and the remaining nodes would have elected a new master. This would also have made migrating one of the nodes to new hardware easier, as you could simply have shut it down and then started up a new one.
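For reference, a minimal sketch of what each node's elasticsearch.yml could contain for that setup (these are the zen-discovery setting names from the ES versions this thread is about; node.master and node.data both default to true, so only the last line is strictly needed):

    # every node can be elected master and holds data (both are the defaults)
    node.master: true
    node.data: true
    # require a majority of the 3 master-eligible nodes before a master is elected
    discovery.zen.minimum_master_nodes: 2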

Thanks for the information. I understood that this was the best practice when I started learning how to use Elasticsearch (which was not that long ago). Sadly, the cluster is already up and running, so I need to make all the changes on a live cluster. What would be the proper way to give another instance the master role?

Also, I've tried to find out what the minimum master nodes value of my cluster is, and which nodes are master-eligible, but couldn't find how.
I tried running curl -XGET localhost:9200/_cluster/settings, but I'm guessing that since no changes were made to the cluster settings it shows no output.
Any help?

You can see which nodes are master-eligible through the _cat/nodes API. Having dedicated master nodes is best practice, but not necessarily practical for smaller clusters, where it often makes sense to have nodes with the default configuration (master-eligible and data) in order to get to the recommended 3 master-eligible nodes.
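For example, something like this should work (exact output columns vary by version; depending on the release, master eligibility shows up either as an m in the master column, with * marking the elected master, or as an m in the node.role column):

    curl -XGET 'localhost:9200/_cat/nodes?v'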

Thank you. Since I saw that all of my nodes are eligible to be masters (through your valuable help), and since I read (here) that when a master is dropped another node will automatically take its place (given that it is master-eligible), I'm going to go ahead and shut down my current master, hoping for the best.
Appreciate the help.
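
In case it helps anyone else reading along, a quick sanity check is to ask the cluster which node currently holds the master role before and after the shutdown, e.g.:

    curl -XGET 'localhost:9200/_cat/master?v'
    curl -XGET 'localhost:9200/_cat/health?v'

If the election worked, _cat/master should report one of the remaining nodes, and _cat/health should return to green once shards have recovered.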