How to prevent the data and ingest nodes from being auto-elected as the active master node

I am creating 3 master, 2 data, and 1 ingest node in an Elasticsearch cluster on AKS, using Elasticsearch version 7.9.1.

I have created the cluster successfully, but I am having a problem with the master election process.

Problem: if I delete the active master node, a dedicated data node is automatically elected as the new active master. Sometimes even the ingest node is elected as the active master.

I thought the cause might be "discovery.seed_hosts", so I removed

- name: discovery.seed_hosts
  value: "elasticsearch-discovery"

and added

- name: discovery.seed_hosts
  value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"

With this change, master node creation works fine, but when I apply the data node YAML it throws this error:

{"type": "server", "timestamp": "2020-10-08T18:49:54,640Z", "level": "WARN", "component": "o.e.d.SeedHostsResolver", "cluster.name": "docker-cluster", "node.name": "elasticsearch-data-0", "message": "failed to resolve host [elasticsearch-master-0]",
"stacktrace": ["java.net.UnknownHostException: elasticsearch-master-0",
"at java.net.InetAddress$CachedAddresses.get(InetAddress.java:800) ~[?:?]",
"at java.net.InetAddress.getAllByName0(InetAddress.java:1495) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1354) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1288) ~[?:?]",
"at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:548) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:490) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:855) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHostsLists$0(SeedHostsResolver.java:144) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:651) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }

So I suspect something is wrong with my configuration.
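(Assuming standard Kubernetes DNS behavior: bare pod names such as `elasticsearch-master-0` generally only resolve from other pods via the per-pod DNS records of a headless Service, in the form `<pod>.<service>.<namespace>.svc.cluster.local`. A hedged sketch of what the seed hosts might need to look like, using the namespace shown in the manifests below:)

```yaml
# Sketch, not verified against this cluster: fully qualified per-pod DNS
# names; these resolve only if elasticsearch-discovery is a headless Service.
- name: discovery.seed_hosts
  value: "elasticsearch-master-0.elasticsearch-discovery.poc-elasticsearch.svc.cluster.local,elasticsearch-master-1.elasticsearch-discovery.poc-elasticsearch.svc.cluster.local,elasticsearch-master-2.elasticsearch-discovery.poc-elasticsearch.svc.cluster.local"
```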

elasticsearch-discovery Service

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: master
spec:
  selector:
    app: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP
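(One detail worth noting, hedged as an assumption about the intent: this Service has no `clusterIP: None`, so it is a regular ClusterIP Service. A StatefulSet's `serviceName` is expected to point at a headless Service; without one, Kubernetes does not create the per-pod DNS records such as `elasticsearch-master-0.elasticsearch-discovery` that seed-host resolution would rely on. A minimal headless variant might look like:)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: master
spec:
  clusterIP: None                  # headless: creates per-pod DNS records
  publishNotReadyAddresses: true   # lets nodes discover each other before becoming ready
  selector:
    app: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP
```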

Master Node Yaml :

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: master
spec:
  serviceName: elasticsearch-discovery
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 3
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      terminationGracePeriodSeconds: 30
      # Use the stork scheduler to enable more efficient placement of the pods
      #schedulerName: stork
      initContainers:
      - name: increase-the-vm-max-map-count
        image: busybox
        #imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-poc-master-pod
        image: XXXXXXXX/elasticsearch-oss:7.9.1-amd64
        #imagePullPolicy: Always
        env:
        - name: network.host
          value: "0.0.0.0"
        - name: discovery.seed_hosts
          value: "elasticsearch-discovery"
        - name: cluster.initial_master_nodes
          value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "XXXXXXXXX"
        - name: "NUMBER_OF_MASTERS"
          value: "3"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 2Gi
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-master
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: azurefile
      resources:
        requests:
          storage: 5Gi

Data Node Service

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-data
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  clusterIP: None
  selector:
    app: elasticsearch
    role: data

Data Node Yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 2
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      terminationGracePeriodSeconds: 30
      # Use the stork scheduler to enable more efficient placement of the pods
      #schedulerName: stork
      initContainers:
      - name: increase-the-vm-max-map-count
        image: busybox
        #imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-poc-data-pod
        image: XXXXXXXXXXXXXXXXXXX/elasticsearch-oss:7.9.1-amd64
        #imagePullPolicy: Always
        env:
        - name: DISCOVERY_SERVICE
          value: elasticsearch-discovery
        - name: discovery.seed_hosts
          value: "elasticsearch-discovery"          
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "docker-cluster"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "true"
        - name: HTTP_ENABLE
          value: "true"
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: azurefile
      resources:
        requests:
          storage: 5Gi

Ingest Node Yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-ingest
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: ingest
spec:
  serviceName: elasticsearch-ingest
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        app: elasticsearch
        role: ingest
    spec:
      terminationGracePeriodSeconds: 30
      # Use the stork scheduler to enable more efficient placement of the pods
      #schedulerName: stork
      initContainers:
      - name: increase-the-vm-max-map-count
        image: busybox
        #imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-poc-ingest-pod
        image: XXXXXXXXXXXXXXXXXXXXXXX/elasticsearch-oss:7.9.1-amd64
        #imagePullPolicy: Always
        env: 
        - name: network.host
          value: "0.0.0.0"
        - name: DISCOVERY_SERVICE
          value: elasticsearch-discovery  
        - name: discovery.seed_hosts
          value: "elasticsearch-discovery"           
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "docker-cluster"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "true"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP

And I noticed something strange: all of my nodes show the role string "dimr". I am confused about whether they were created correctly or not. I am expecting 3 dedicated master, 2 data, and 1 ingest node.
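(The "dimr" role string means data, ingest, master, and remote cluster client are all enabled, i.e. every node is running with the default roles. This suggests the `NODE_MASTER`/`NODE_INGEST`/`NODE_DATA` environment variables are being ignored: the stock `elasticsearch-oss` image only honors such variables if a custom entrypoint maps them. The image does accept Elasticsearch settings passed as dotted environment variable names, as these manifests already do for `network.host` and `discovery.seed_hosts`. A hedged sketch for the master StatefulSet's `env` section:)

```yaml
# Sketch, assuming the stock docker entrypoint: pass node roles as
# Elasticsearch settings rather than NODE_* variables it does not read.
- name: node.master
  value: "true"
- name: node.data
  value: "false"
- name: node.ingest
  value: "false"
```

(With the roles set this way on each StatefulSet, only the three dedicated master nodes remain master-eligible, so a data or ingest node can no longer be elected as the active master.)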

I suggest you look into using ECK (Elastic Cloud on Kubernetes) to simplify the creation and management of your Elasticsearch cluster.
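For reference, an ECK cluster of the same shape might look roughly like this (a sketch assuming the ECK operator is installed; the cluster name `poc` is a placeholder):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: poc                        # placeholder name
  namespace: poc-elasticsearch
spec:
  version: 7.9.1
  nodeSets:
  - name: master                   # 3 dedicated master-eligible nodes
    count: 3
    config:
      node.master: true
      node.data: false
      node.ingest: false
  - name: data                     # 2 dedicated data nodes
    count: 2
    config:
      node.master: false
      node.data: true
      node.ingest: false
  - name: ingest                   # 1 dedicated ingest node
    count: 1
    config:
      node.master: false
      node.data: false
      node.ingest: true
```

The operator then handles discovery, election quorum, and rolling changes for you.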