Hi, I am building a 6-node cluster: nodes 1-3 are master nodes and nodes 4-6 are data nodes. I have the following in elasticsearch.yml on each node:
bootstrap.memory_lock: false
cluster.initial_master_nodes:
- awselsdevlap01.est1933.com-esnode01
- awselsdevlap02.est1933.com-esnode02
- awselsdevlap03.est1933.com-esnode03
cluster.name: es_gallo_dev
discovery.zen.minimum_master_nodes: 3
discovery.zen.ping.unicast.hosts:
- awselsdevlap01.est1933.com
- awselsdevlap02.est1933.com
- awselsdevlap03.est1933.com
- awselsdevlap04.est1933.com
- awselsdevlap05.est1933.com
- awselsdevlap06.est1933.com
http.port: 9200
network.host: _site_
transport.tcp.port: 9300
xpack.security.authc.realms.ldap.ldap1.bind_dn: uid=s-elasticsearch,ou=people,o=ejgallo.com
xpack.security.authc.realms.ldap.ldap1.group_search.base_dn: ou=groups,o=ejgallo.com
xpack.security.authc.realms.ldap.ldap1.order: 1
xpack.security.authc.realms.ldap.ldap1.url: ldaps://gdsprd01.ejgallo.com:636
xpack.security.authc.realms.ldap.ldap1.user_search.base_dn: ou=people,o=ejgallo.com
xpack.security.enabled: true
node.name: awselsdevlap01.est1933.com-esnode01
#################################### Paths ####################################
# Path to directory containing configuration (this file and logging.yml):
path.data: /es_data/data01/awselsdevlap01.est1933.com-esnode01,/es_data/data02/awselsdevlap01.est1933.com-esnode01,/es_data/data03/awselsdevlap01.est1933.com-esnode01,/es_data/data04/awselsdevlap01.est1933.com-esnode01,/es_data/data05/awselsdevlap01.est1933.com-esnode01
path.logs: /es_data/es_logs/awselsdevlap01.est1933.com-esnode01
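For completeness, since the role flags are not in the snippet above: the master/data split is done with the usual role settings on each node, roughly like this (paraphrased, the exact values live in each host's elasticsearch.yml):

```yaml
# On the master nodes (1-3): dedicated masters, no data
node.master: true
node.data: false

# On the data nodes (4-6): data only, not master-eligible
node.master: false
node.data: true
```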
This is what the logs look like:
[2019-05-23T00:00:35,579][WARN ][o.e.c.c.ClusterFormationFailureHelper] [awselsdevlap01.est1933.com-esnode01] master not discovered or elected yet, an election requires at least 2 nodes with ids from [UWgnBPsHQ1aW4xXEZVKyJQ, T_8EuKpaTqWg2oP3TAnAaA, YZ6m2ioDQWqi1cNnOteB6w], have discovered [{awselsdevlap02.est1933.com-esnode02}{T_8EuKpaTqWg2oP3TAnAaA}{-D-VHjdeSUyJdlTauLVuQw}{10.173.148.65}{10.173.148.65:9300}{ml.machine_memory=31980478464, ml.max_open_jobs=20, xpack.installed=true}, {awselsdevlap03.est1933.com-esnode03}{UWgnBPsHQ1aW4xXEZVKyJQ}{-CoUrjn9QlKE-K5SqZ-JYw}{10.173.148.73}{10.173.148.73:9300}{ml.machine_memory=31980478464, ml.max_open_jobs=20, xpack.installed=true}] which is a quorum; discovery will continue using [10.173.148.65:9300, 10.173.148.73:9300, 10.173.148.58:9300, 10.173.148.50:9300, 10.173.148.67:9300] from hosts providers and [{awselsdevlap01.est1933.com-esnode01}{YZ6m2ioDQWqi1cNnOteB6w}{epXUK1dTSKCf0Ca9CphE3A}{10.173.148.143}{10.173.148.143:9300}{ml.machine_memory=31980478464, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 22, last-accepted version 2867 in term 21
Any idea what's wrong with my config? I checked other similar postings and made sure the entries in cluster.initial_master_nodes match each node.name setting.
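One thing I was unsure about: the file mixes the old Zen discovery settings with the 7.x ones (cluster.initial_master_nodes). If it matters, my understanding is that the 7.x replacement for the unicast hosts list would look something like this (untested, and listing only the master-eligible nodes):

```yaml
# 7.x replacement for discovery.zen.ping.unicast.hosts;
# discovery.zen.minimum_master_nodes is ignored in 7.x.
discovery.seed_hosts:
- awselsdevlap01.est1933.com
- awselsdevlap02.est1933.com
- awselsdevlap03.est1933.com
```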