Not enough master nodes discovered during pinging on ES 6.0.1 with Docker

Hello all,

I'm trying to set up an Elasticsearch cluster (elasticsearch-6.0.1-1.noarch) using Docker and the EC2 discovery plugin across multiple EC2 instances (not a single instance).

My use case is building a Dockerfile with the needed packages installed, then mounting the data and configuration via the docker run command. Everything seems fine except that the nodes are unable to discover each other, even though the EC2 plugin can give me the list of both data and master nodes.
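
For reference, each container is started with something along these lines (the image name and host paths here are placeholders, not my exact values):

# sketch of the run command; image name and host paths are placeholders
# --ulimit memlock is needed for bootstrap.memory_lock: true
docker run -d --name elasticsearch \
  --ulimit memlock=-1:-1 \
  -p 9200:9200 \
  -v /data/elasticsearch:/var/lib/elasticsearch \
  -v /path/to/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml \
  my-es6-image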

My configuration is below.

Data node config:

cluster.name: dev-cluster
node.name: ${HOSTNAME}
node.data: true
node.master: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: _ec2:privateIp_
discovery.zen.hosts_provider: ec2
discovery.zen.minimum_master_nodes: 2
discovery.ec2.availability_zones: eu-west-1a,eu-west-1b,eu-west-1c
discovery.ec2.groups: sg-xxxx,sg-xxxx
cloud.node.auto_attributes: true
discovery.ec2.host_type: private_ip
discovery.ec2.tag.Name: Elasticsearch6-docker
cluster.routing.allocation.awareness.attributes: aws_availability_zone
discovery.ec2.endpoint: ec2.eu-west-1.amazonaws.com

Master node config:

cluster.name: dev-cluster
node.name: ${HOSTNAME}
node.data: false
node.master: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: _ec2:privateIp_
discovery.zen.hosts_provider: ec2
discovery.zen.minimum_master_nodes: 2
discovery.ec2.availability_zones: eu-west-1a,eu-west-1b,eu-west-1c
discovery.ec2.groups: sg-xxxx,sg-xxxx
cloud.node.auto_attributes: true
discovery.ec2.host_type: private_ip
discovery.ec2.tag.Name: Elasticsearch6-docker
cluster.routing.allocation.awareness.attributes: aws_availability_zone
discovery.ec2.endpoint: ec2.eu-west-1.amazonaws.com

When I start Elasticsearch in debug mode, I get the output below:

[2018-02-05T14:33:15,773][DEBUG][o.e.d.z.ZenDiscovery     ] [818b858792b5] filtered ping responses: (ignore_non_masters [false])
        --> ping_response{node [{818b858792b5}{nCbUv6o0QUyKU8tFblOKGQ}{uweZKRMDTAiRNkqmhT34cQ}{10.10.227.26}{10.10.227.26:9300}{aws_availability_zone=eu-west-1c}], id[117], master [null],cluster_state_version [-1], cluster_name[dev-cluster]}
[2018-02-05T14:33:15,773][WARN ][o.e.d.z.ZenDiscovery     ] [818b858792b5] not enough master nodes discovered during pinging (found [[Candidate{node={818b858792b5}{nCbUv6o0QUyKU8tFblOKGQ}{uweZKRMDTAiRNkqmhT34cQ}{10.10.227.26}{10.10.227.26:9300}{aws_availability_zone=eu-west-1c}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2018-02-05T14:33:17,894][WARN ][o.e.n.Node               ] [818b858792b5] timed out while waiting for initial discovery state - timeout: 30s
[2018-02-05T14:33:17,900][DEBUG][o.e.h.n.Netty4HttpServerTransport] [818b858792b5] Bound http to address {[::]:9200}
[2018-02-05T14:33:17,900][DEBUG][o.e.d.e.Ec2NameResolver  ] [818b858792b5] obtaining ec2 hostname from ec2 meta-data url http://169.254.169.254/latest/meta-data/local-ipv4
[2018-02-05T14:33:17,905][INFO ][o.e.h.n.Netty4HttpServerTransport] [818b858792b5] publish_address {10.10.227.26:9200}, bound_addresses {[::]:9200}
[2018-02-05T14:33:17,905][INFO ][o.e.n.Node               ] [818b858792b5] started
[2018-02-05T14:33:18,774][DEBUG][o.e.d.z.ZenDiscovery     ] [818b858792b5] filtered ping responses: (ignore_non_masters [false])
        --> ping_response{node [{818b858792b5}{nCbUv6o0QUyKU8tFblOKGQ}{uweZKRMDTAiRNkqmhT34cQ}{10.10.227.26}{10.10.227.26:9300}{aws_availability_zone=eu-west-1c}], id[130], master [null],cluster_state_version [-1], cluster_name[dev-cluster]}
[2018-02-05T14:33:18,774][WARN ][o.e.d.z.ZenDiscovery     ] [818b858792b5] not enough master nodes discovered during pinging (found [[Candidate{node={818b858792b5}{nCbUv6o0QUyKU8tFblOKGQ}{uweZKRMDTAiRNkqmhT34cQ}{10.10.227.26}{10.10.227.26:9300}{aws_availability_zone=eu-west-1c}, clusterStateVersion=-1}]], but needed [2]), pinging again

As far as I can see, every node can list all the other nodes, and they are all reachable from one another, whether from any container or from the hosts themselves:

Connection to 10.10.227.177 9200 port [tcp/wap-wsp] succeeded!
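
(That output is from a plain netcat probe, run roughly as below from the hosts and from inside the containers; the exact invocation is reconstructed.)

# connectivity check against a peer node's HTTP port
nc -zv 10.10.227.177 9200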

Has anyone hit this issue before, or can anyone help me find what's wrong with my configuration?

Thank you in advance.

Thanks, I was able to resolve this issue by exposing both ports 9200 and 9300. Before that I was exposing only 9200, so the transport port that the nodes use to ping each other was never reachable.
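
For anyone who lands here: 9300 is the transport port that Elasticsearch nodes use for discovery and cluster traffic, while 9200 is only the HTTP/REST port. The fixed run command looks roughly like this (same placeholder image name and paths as above):

# publish both the HTTP port (9200) and the transport port (9300)
docker run -d --name elasticsearch \
  --ulimit memlock=-1:-1 \
  -p 9200:9200 \
  -p 9300:9300 \
  -v /data/elasticsearch:/var/lib/elasticsearch \
  -v /path/to/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml \
  my-es6-image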
