Not enough master nodes discovered during pinging

I have 3 AWS EC2 instances (sink1, sink2, sink3) in us-west-2a, us-west-2b, and us-west-2c respectively. Each instance runs a single Elasticsearch service (5.6.2) that I'm trying to start up to form a 3-node cluster.

All three machines are booted from the same AMI and have the same Elasticsearch config.

Two of them (sink1, sink2) successfully join the cluster; only one (sink3) cannot:
[ec2-user@ip-172-31-0-222 ~]$ curl sink3:9200
{
  "name" : "sink3_9200",
  "cluster_name" : "es_cluster",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "5.6.2",
    "build_hash" : "57e20f3",
    "build_date" : "2017-09-23T13:16:45.703Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
[ec2-user@ip-172-31-0-222 ~]$ curl sink2:9200
{
  "name" : "sink2_9200",
  "cluster_name" : "es_cluster",
  "cluster_uuid" : "aS5nE-iHQBCF43oTJU933g",
  "version" : {
    "number" : "5.6.2",
    "build_hash" : "57e20f3",
    "build_date" : "2017-09-23T13:16:45.703Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
[ec2-user@ip-172-31-0-222 ~]$ curl sink1:9200
{
  "name" : "sink1_9200",
  "cluster_name" : "es_cluster",
  "cluster_uuid" : "aS5nE-iHQBCF43oTJU933g",
  "version" : {
    "number" : "5.6.2",
    "build_hash" : "57e20f3",
    "build_date" : "2017-09-23T13:16:45.703Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
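
A quick way to confirm which nodes have actually joined is the _cat/nodes and _cluster/health APIs (run against any node that is in the cluster; the sink1 hostname here is just taken from the output above):

# list the nodes that have joined (the * in the master column marks the elected master)
curl sink1:9200/_cat/nodes?v
# overall cluster health, including number_of_nodes
curl sink1:9200/_cluster/health?pretty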

Here's the log of the one that couldn't join:

[2017-09-29T09:46:00,499][WARN ][o.e.d.z.ZenDiscovery ] [sink3_9200] not enough master nodes discovered during pinging (found [[Candidate{node={sink3_9200}{qokAxuFxSj6dGt5-A-YThA}{Ns_kdrpvRVS_JQRKUvWtZw}{172.31.0.222}{172.31.0.222:9300}{aws_availability_zone=us-west-2c, rack=us-west-2c}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2017-09-29T09:46:30,529][WARN ][o.e.d.z.ZenDiscovery ] [sink3_9200] not enough master nodes discovered during pinging (found [[Candidate{node={sink3_9200}{qokAxuFxSj6dGt5-A-YThA}{Ns_kdrpvRVS_JQRKUvWtZw}{172.31.0.222}{172.31.0.222:9300}{aws_availability_zone=us-west-2c, rack=us-west-2c}, clusterStateVersion=-1}]], but needed [2]), pinging again
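
The warning means ZenDiscovery's ping round found only sink3 itself as a master candidate, while discovery.zen.minimum_master_nodes: 2 requires at least two, so it keeps pinging. One basic thing to verify (a sketch, assuming the sink1/sink2 hostnames resolve to the other instances' private IPs as in the post) is that sink3 can reach the other nodes on the transport port 9300, which is what discovery and cluster formation use:

# from sink3, check TCP connectivity to the other nodes' transport port
nc -zv sink1 9300
nc -zv sink2 9300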

Here's sink3's config; sink1 and sink2 have the same content except for node.attr.rack and discovery.ec2.tag.Name.

config:


cluster.name: es_cluster
node.name: sink3_9200
node.attr.rack: us-west-2c
network.host: ec2
http.port: 9200
node.master: true
node.data: true
plugin.mandatory: discovery-ec2, repository-s3
cloud:
  aws:
    access_key: XXXXXXXXXXXXXXX
    secret_key: XXXXXXXXXXXXXXX
discovery.zen.hosts_provider: ec2
discovery.type: ec2
discovery.ec2.groups: Kafka_Spark_group
discovery.ec2.host_type: private_ip
discovery.ec2.tag.Name: sink3
discovery.ec2.availability_zones: us-west-2a, us-west-2b, us-west-2c
cloud.aws.region: us-west-2
cloud.aws.protocol: http
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
discovery.zen.minimum_master_nodes: 2
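
For what it's worth, discovery.ec2.tag.TAGNAME filters the instances returned by the EC2 hosts provider to those carrying that exact tag value, so a per-node discovery.ec2.tag.Name (sink3 here, presumably sink1/sink2 on the others) is worth double-checking, as is whether the Kafka_Spark_group security group allows port 9300 between the instances. As a sanity check (a sketch, assuming the AWS CLI is configured with the same credentials and region, and that the instances are tagged Name=sink1/sink2/sink3):

# show the inbound rules of the security group used for discovery
aws ec2 describe-security-groups --region us-west-2 \
  --filters "Name=group-name,Values=Kafka_Spark_group"

# list the private IPs of the instances matching the Name tags the nodes filter on
aws ec2 describe-instances --region us-west-2 \
  --filters "Name=tag:Name,Values=sink1,sink2,sink3" \
  --query "Reservations[].Instances[].PrivateIpAddress" --output text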

Any idea about this?

Solved... please close this thread.
