I upgraded a cluster from version 7.17.5 to 8.3.2 and I'm having issues with master discovery. The node that is supposed to become master keeps logging this:
[2022-07-12T17:39:00,115][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es1] master not discovered or elected yet, an election requires at least 2 nodes with ids from [uai1YoJwRlCjrOVKrLQKNA, FCYGfJhwQ2iTy7PeozEVeg, 0b_QLfjpTkC9SVVjj2b0vg], have only discovered non-quorum [{es1}{FCYGfJhwQ2iTy7PeozEVeg}{gnuTc_d_QdOyIPt8cbY5nw}{es1}{192.168.1.86}{192.168.1.86:9300}{dim}, {es3}{nboXG_DmQAyXsZXbbRLGxA}{fKzT_np6RqWvkM39wGV8SQ}{es3}{192.168.1.87}{192.168.1.87:9300}{dim}]; discovery will continue using [127.0.1.1:9300, 192.168.1.87:9300] from hosts providers and [{es1}{FCYGfJhwQ2iTy7PeozEVeg}{gnuTc_d_QdOyIPt8cbY5nw}{es1}{192.168.1.86}{192.168.1.86:9300}{dim}] from last-known cluster state; node term 22, last-accepted version 8657 in term 22
It was my understanding that having this line in the configuration would force a specific node to become master:
cluster.initial_master_nodes: ["es1"]
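For context, here's roughly what I believe the relevant discovery settings look like on es1 (the addresses are taken from the log line above; the cluster name and seed host list are my best recollection, and everything else is omitted):

```yaml
# Sketch of the discovery-related settings on es1.
# Addresses come from the log output above; cluster.name is
# a placeholder and the rest of the file is omitted.
cluster.name: my-cluster
node.name: es1
network.host: 192.168.1.86
discovery.seed_hosts: ["192.168.1.86", "192.168.1.87"]
cluster.initial_master_nodes: ["es1"]
```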
I understand that hard-coding a master node defeats the purpose of master elections and the redundancy they provide. However, just to get the cluster running until I can debug the issue in depth: what configuration settings am I missing that will force this node to become master of the cluster?
Thanks!