How do I make a master node in a 3-node cluster?

Current 3-node cluster setup

I have set up a 3-node cluster:

  "cluster_name" : "DEMOCLUSTER",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 2,
  "active_shards" : 4,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0

If I set node.roles to master for one node and restart it, I get the errors below:

node.roles:
  - master
[ERROR][o.e.b.Elasticsearch      ] [ip-.ec2.internal] fatal exception while booting Elasticsearch
java.lang.IllegalStateException: node does not have the data role but has shard data: [/var/lib/elasticsearch/indices/0rSKvI5aSH2g4UzMkZanNA/0, /var/lib/elasticsearch/indices/palG910hQ9aZnoXiFL5jQg/0]. Use 'elasticsearch-node repurpose' tool to clean up

If I run elasticsearch-node repurpose, the cluster goes bad, and I am also no longer able to curl the cluster health with the elastic password.
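For context: elasticsearch-node is an offline tool, and repurpose deletes any on-disk shard data that no longer matches the node's roles. Run against a node that still holds allocated shards, it destroys that data, which is why the cluster went bad. A sketch of the intended order (paths assume a package install and are placeholders for your environment):

```shell
# 1. Only after the node holds no shard data (drain it first via
#    allocation filtering), stop the node:
sudo systemctl stop elasticsearch

# 2. Set node.roles: [ master ] in elasticsearch.yml, then remove the
#    now-unneeded on-disk shard/index metadata:
sudo /usr/share/elasticsearch/bin/elasticsearch-node repurpose

# 3. Start the node again with its new role:
sudo systemctl start elasticsearch
```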

Now, in what order (and how) do I set one node as master here and end up with a working 1-master, 2-data cluster?

How can I further grow the cluster into something like 3 masters and 6 data nodes?

@H_K7 welcome to the community!
Node roles are essentially assigned while configuring the nodes, before actually spinning up the cluster. Since you already have data on all nodes, you cannot change a data node into a master without moving its shards off first.
One approach is to first set exclude._ip in your dynamic cluster settings to move shards off that node and reassign them to the remaining nodes. Then you can change that node's roles and make it a master.
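The allocation-filtering step described above goes through the cluster settings API. A hedged sketch (the IP address, endpoint, and credentials are placeholders for your environment):

```shell
# Move all shards off the node you want to repurpose as a master.
# 10.0.0.5 is a placeholder for that node's IP.
curl -k -u elastic -X PUT "https://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d '
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.5"
  }
}'

# Watch relocation finish before changing the node's roles:
curl -k -u elastic "https://localhost:9200/_cluster/health?pretty"
```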
Since you have a new cluster, it would be best to start over from configuration, and maybe use the cluster.initial_master_nodes setting to tell the cluster nodes which one bootstraps as master. In addition, set node.roles for each ES node so the others don't assume the master role when the initial one goes down.
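As a sketch only (node names are placeholders, and note that with a single master-eligible node the cluster cannot survive that node's loss):

```yaml
# elasticsearch.yml on the dedicated master (node-1):
node.roles: [ master ]

# elasticsearch.yml on the data nodes (node-2, node-3):
node.roles: [ data ]

# On every node, for the very first startup only: list the master-eligible
# node(s) so the cluster can bootstrap, plus the seed hosts for discovery:
cluster.initial_master_nodes: [ "node-1" ]
discovery.seed_hosts: [ "node-1", "node-2", "node-3" ]
```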

You should always aim to have three master-eligible nodes in your cluster. If you have only 3 nodes, that means all of them should be master-eligible. If you then want to transition to a cluster with 3 dedicated master nodes and 6 data nodes, add the data nodes first and then make the master-eligible nodes dedicated master nodes, one at a time.
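A sketch of that transition (node names are hypothetical); the per-node node.roles setting evolves roughly like this:

```yaml
# Phase 0 -- today: three nodes, all master-eligible and holding data
# (on node-1..node-3):
node.roles: [ master, data ]

# Phase 1 -- grow: bring up six new data-only nodes
# (on node-4..node-9):
node.roles: [ data ]

# Phase 2 -- one node at a time, waiting for green health between each,
# restart the original nodes as dedicated masters; shards relocate off them
# (on node-1..node-3):
node.roles: [ master ]
```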

@Christian_Dahlqvist, @Ayush_Mathur, thanks for the response.
Also, if I create the very first node (before even creating the other two nodes I planned for the 3-node cluster) with node.roles set to master, it errors on startup. Below is the config:

cluster.name: demoCLUSTER
node.name: ip-FIRSTNODE.ec2.internal
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
node.roles:
  - master
discovery.seed_hosts: ["ip-FIRSTNODE.ec2.internal"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["ip-FIRSTNODE.ec2.internal"]

Not sure why it doesn't work that way. Is this also why we shouldn't create the very first node as a dedicated master?

If you have only 3 nodes, let them have all roles and not be dedicated masters.

It would also help if you showed the exact error message from the logs.
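For comparison, the all-roles setup suggested above needs no special per-node configuration; a sketch of the relevant elasticsearch.yml line on each of the three nodes:

```yaml
# Either omit node.roles entirely (every role is the default),
# or list the roles explicitly:
node.roles: [ master, data, ingest ]
```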

Right now these are the only logs I have for the single node mentioned above (the one I tried to assign the master role):

[2023-02-03T11:12:34,589][INFO ][o.e.l.LicenseService     ] [ip-FIRSTNODE.ec2.internal] license [2afd032d-5e75-4771-8836-abcce96b1160] mode [basic] - valid
[2023-02-03T11:12:34,590][INFO ][o.e.x.s.a.Realms         ] [ip-FIRSTNODE.ec2.internal] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2023-02-03T11:13:02,903][WARN ][o.e.c.c.Coordinator      ] [ip-FIRSTNODE.ec2.internal] This node is a fully-formed single-node cluster with cluster UUID [CWeTC4bJR_e7C6sJDbQRtg], but it is configured as if to discover other nodes and form a multi-node cluster via the [discovery.seed_hosts=[ip-FIRSTNODE.ec2.internal]] setting. Fully-formed clusters do not attempt to discover other nodes, and nodes with different cluster UUIDs cannot belong to the same cluster. The cluster UUID persists across restarts and can only be changed by deleting the contents of the node's data path(s). Remove the discovery configuration to suppress this message.
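The WARN message itself suggests the fix: once a node has bootstrapped (it already has a cluster UUID), the bootstrap and discovery settings on it are redundant. A sketch of the change in elasticsearch.yml:

```yaml
# After the first successful startup, these can be removed (or commented
# out) on the already-bootstrapped node; they only matter before the
# cluster UUID exists:
# cluster.initial_master_nodes: ["ip-FIRSTNODE.ec2.internal"]
# discovery.seed_hosts: ["ip-FIRSTNODE.ec2.internal"]
```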
