Master not discovered or elected yet, an election requires at least 2 nodes with ids - ELK Stack - Docker Swarm

Hi, I am trying to set up an Elasticsearch cluster with 3 master nodes, 1 Kibana node, and 1 data node. This is a common topic, but I would like to understand what I am doing wrong.
Below is the Portainer error log on master1:

WARN master not discovered or elected yet, an election requires 2 nodes with ids [yXZYR85iSWCVPDT2LCkC9g, k2QEE-mkTn62BURNqWPEJg], have discovered possible quorum [{master1}{k2QEE-mkTn62BURNqWPEJg}{yNc1jnlDR1OBkyham5OZMA}{10.0.0.45}{10.0.0.45:9300}{cdfhilmrstw}, {master2}{yXZYR85iSWCVPDT2LCkC9g}{K24Y9ebYTjy4hky6uT5w9w}{10.0.6.130}{10.0.6.130:9300}{cdfhilmrstw}, {master3}{yE_p2cu8RvqPcVuXHuuBVA}{xz-cyL6oScqEHplp_bss-A}{10.0.6.131}{10.0.6.131:9300}{cdfhilmrstw}]; discovery will continue using [10.0.6.5:9300, 10.0.6.8:9300, 10.0.6.11:9300] from hosts providers and [{master1}{k2QEE-mkTn62BURNqWPEJg}{yNc1jnlDR1OBkyham5OZMA}{10.0.0.45}{10.0.0.45:9300}{cdfhilmrstw}] from last-known cluster state; node term 23, last-accepted version 0 in term 0 | type=server timestamp=2023-02-15T06:33:37,743Z component=o.e.c.c.ClusterFormationFailureHelper cluster.name=oss-elk-ncw node.name=master1
WARN address [10.0.6.5:9300], node [null], requesting [false] connection failed: [master1][10.0.0.45:9300] local node found | type=server timestamp=2023-02-15T06:33:38,068Z component=o.e.d.PeerFinder cluster.name=oss-elk-ncw node.name=master1
WARN address [10.0.6.5:9300], node [null], requesting [false] connection failed: [master1][10.0.0.45:9300] local node found | type=server timestamp=2023-02-15T06:33:39,069Z component=o.e.d.PeerFinder cluster.name=oss-elk-ncw node.name=master1

Here are the logs from the other two masters:

master2

INFO added {{data-ncw-elk-wn-13}{pH33mLWaR4Cz0DlGuXXIkQ}{RjGaI1jNTZaTWk-pRCrshQ}{10.0.6.132}{10.0.6.132:9300}{cdfhilmrstw}}, term: 23, version: 78, reason: ApplyCommitRequest{term=23, version=78, sourceNode={master3}{yE_p2cu8RvqPcVuXHuuBVA}{xz-cyL6oScqEHplp_bss-A}{10.0.6.131}{10.0.6.131:9300}{cdfhilmrstw}{ml.machine_memory=269940355072, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=17179869184, transform.node=true}} | type=server timestamp=2023-02-15T06:19:20,185Z component=o.e.c.s.ClusterApplierService cluster.name=oss-elk-ncw node.name=master2 cluster.uuid=9mFNb2_CR_mI55VCB0_CcA node.id=yXZYR85iSWCVPDT2LCkC9g
{"type": "server", "timestamp": "2023-02-15T06:19:25,399Z", "level": "WARN", "component": "o.e.d.HandshakingTransportAddressConnector", "cluster.name": "oss-elk-ncw", "node.name": "master2", "message": "[connectToRemoteMasterNode[10.0.6.5:9300]] completed handshake with [{master1}{k2QEE-mkTn62BURNqWPEJg}{yNc1jnlDR1OBkyham5OZMA}{10.0.0.45}{10.0.0.45:9300}{cdfhilmrstw}{ml.machine_memory=269940350976, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=17179869184, transform.node=true}] but followup connection failed", "cluster.uuid": "9mFNb2_CR_mI55VCB0_CcA", "node.id": "yXZYR85iSWCVPDT2LCkC9g" , 
"stacktrace": ["org.elasticsearch.transport.ConnectTransportException: [master1][10.0.0.45:9300] connect_exception",

master3

INFO added {{data-ncw-elk-wn-13}{pH33mLWaR4Cz0DlGuXXIkQ}{RjGaI1jNTZaTWk-pRCrshQ}{10.0.6.132}{10.0.6.132:9300}{cdfhilmrstw}}, term: 23, version: 78, reason: Publication{term=23, version=78} | type=server timestamp=2023-02-15T06:19:20,546Z component=o.e.c.s.ClusterApplierService cluster.name=oss-elk-ncw node.name=master3 cluster.uuid=9mFNb2_CR_mI55VCB0_CcA node.id=yE_p2cu8RvqPcVuXHuuBVA
{"type": "server", "timestamp": "2023-02-15T06:19:25,465Z", "level": "WARN", "component": "o.e.d.HandshakingTransportAddressConnector", "cluster.name": "oss-elk-ncw", "node.name": "master3", "message": "[connectToRemoteMasterNode[10.0.6.5:9300]] completed handshake with [{master1}{k2QEE-mkTn62BURNqWPEJg}{yNc1jnlDR1OBkyham5OZMA}{10.0.0.45}{10.0.0.45:9300}{cdfhilmrstw}{ml.machine_memory=269940350976, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=17179869184, transform.node=true}] but followup connection failed", "cluster.uuid": "9mFNb2_CR_mI55VCB0_CcA", "node.id": "yE_p2cu8RvqPcVuXHuuBVA" , 
"stacktrace": ["org.elasticsearch.transport.ConnectTransportException: [master1][10.0.0.45:9300] connect_exception",

Below is my docker-compose.yml file:

version: '3.7'
services:
  master1:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.8
    environment:
      - node.name=master1
      - discovery.seed_hosts=master1,master2,master3
      - cluster.name=oss-elk-ncw
      - bootstrap.memory_lock=true
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms16g -Xmx16g"
      - xpack.monitoring.collection.enabled=true
      - xpack.security.audit.enabled=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - /data/elastic:/usr/share/elasticsearch/data
    networks:
      - oss-elk-network
    deploy:
      mode: "replicated"
      replicas: 1
      placement:
        constraints: [ node.labels.role == es-master-node ]
  master2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.8
    environment:
      - node.name=master2
      - discovery.seed_hosts=master1,master2,master3
      - cluster.name=oss-elk-ncw
      - bootstrap.memory_lock=true
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms16g -Xmx16g"
      - xpack.monitoring.collection.enabled=true
      - xpack.security.audit.enabled=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /data/elastic:/usr/share/elasticsearch/data
    networks:
      - oss-elk-network
    deploy:
      mode: "replicated"
      replicas: 1
      placement:
        constraints: [ node.labels.role == es-master-node ]
  master3:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.8
    environment:
      - node.name=master3
      - discovery.seed_hosts=master1,master2,master3
      - cluster.name=oss-elk-ncw
      - bootstrap.memory_lock=true
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms16g -Xmx16g"
      - xpack.monitoring.collection.enabled=true
      - xpack.security.audit.enabled=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /data/elastic:/usr/share/elasticsearch/data
    networks:
      - oss-elk-network
    deploy:
      mode: "replicated"
      replicas: 1
      placement:
        constraints: [ node.labels.role == es-master-node ]
  data:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.8
    environment:
      - node.name=data-{{.Node.Hostname}}
      - discovery.seed_hosts=master1,master2,master3
      - cluster.name=oss-elk-ncw
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms32g -Xmx32g"
      - xpack.monitoring.collection.enabled=true
      - xpack.security.audit.enabled=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /data/elastic:/usr/share/elasticsearch/data
    networks:
      - oss-elk-network
    deploy:
      replicas: 1
      placement:
        constraints: [ node.labels.role == es-data-node ]
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.8
    environment:
      - ELASTICSEARCH_HOSTS=http://master1:9200
      - monitoring.kibana.collection.enabled=false
    networks:
      - oss-elk-network
    ports:
      - 5601:5601
    deploy:
      replicas: 1
      placement:
        constraints: [ node.labels.role == es-kibana-node ]

networks:
  oss-elk-network:

Can you please advise what I am doing wrong? I am new to Elasticsearch.

Hello @vdcharter, welcome to the community!
Is this the first time your cluster has been spun up? If so, you are probably missing the cluster.initial_master_nodes property in your configuration.
If your issue is arising during bootstrapping, please read: Bootstrapping a cluster | Elasticsearch Guide [8.6] | Elastic
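
For reference, here is a minimal sketch of what that could look like in each master's environment block for the very first bootstrap, reusing the node and cluster names from your compose file (the setting should list the node.name values of your master-eligible nodes and be the same on every one of them):

  master1:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.8
    environment:
      - node.name=master1
      - cluster.name=oss-elk-ncw
      - discovery.seed_hosts=master1,master2,master3
      # Only needed for the very first cluster formation; per the
      # bootstrapping guide it should be removed once the cluster
      # has formed successfully.
      - cluster.initial_master_nodes=master1,master2,master3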

Hi, it's not the first time. I initially set "cluster.initial_master_nodes" to deploy the stack, then removed it when updating the running stack, since I don't need that configuration once the cluster is up, right?
Also, do I need to list all the master nodes in "cluster.initial_master_nodes"?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.