Discovery-ec2 does not attempt to connect to nodes [RESOLVED]

Hi all,

I've been fighting with this issue for quite some time; hopefully someone here can shed some light on it.

I have a 2-node test cluster running Elasticsearch 6.2.4.

{
  "name" : "ip-172-18-5-140",
  "cluster_name" : "elasticstack",
  "cluster_uuid" : "SwS0K-RlTdS1AWPCKBg6aw",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Config

cluster:
  name: elasticstack
  routing:
    allocation:
      awareness:
        attributes: aws_availability_zone

node:
  name: ip-172-18-5-140
  max_local_storage_nodes: 1
  data: true
  master: true

path:
  data: /var/lib/elasticsearch
  logs: /var/log/elasticsearch

network:
  host: _ec2:privateIpv4_


transport:
  tcp:
    port: 9300

discovery:
  zen:
    minimum_master_nodes: 1
    hosts_provider: ec2
  ec2:
    protocol: http
    host_type: private_ip
#    tag:
#      elk_cluster: elasticstack

action:
  auto_create_index: true
  destructive_requires_name: true

cloud:
  node:
    auto_attributes: true

LOG

[2018-05-29T05:49:15,467][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/7622] [Main.cc@128] controller (64 bit): Version 6.2.4 (Build 524e7fe231abc1) Copyright (c) 2018 Elasticsearch BV
[2018-05-29T05:49:16,899][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2018-05-29T05:49:17,604][INFO ][o.e.d.DiscoveryModule    ] [ip-172-18-5-140] using discovery type [zen]
[2018-05-29T05:49:18,537][INFO ][o.e.n.Node               ] [ip-172-18-5-140] initialized
[2018-05-29T05:49:18,538][INFO ][o.e.n.Node               ] [ip-172-18-5-140] starting ...
[2018-05-29T05:49:18,743][INFO ][o.e.t.TransportService   ] [ip-172-18-5-140] publish_address {10.76.116.123:9300}, bound_addresses {10.76.116.123:9300}
[2018-05-29T05:49:18,774][INFO ][o.e.b.BootstrapChecks    ] [ip-172-18-5-140] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-05-29T05:49:24,171][INFO ][o.e.c.s.MasterService    ] [ip-172-18-5-140] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {ip-172-18-5-140}{wfYblEg-T8WMVYpEZ_mNdw}{azHmUa-RQvOQGVOhYZXzOQ}{10.76.116.123}{10.76.116.123:9300}{aws_availability_zone=us-east-2b, ml.machine_memory=16340688896, ml.max_open_jobs=20, ml.enabled=true}
[2018-05-29T05:49:24,176][INFO ][o.e.c.s.ClusterApplierService] [ip-172-18-5-140] new_master {ip-172-18-5-140}{wfYblEg-T8WMVYpEZ_mNdw}{azHmUa-RQvOQGVOhYZXzOQ}{10.76.116.123}{10.76.116.123:9300}{aws_availability_zone=us-east-2b, ml.machine_memory=16340688896, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {ip-172-18-5-140}{wfYblEg-T8WMVYpEZ_mNdw}{azHmUa-RQvOQGVOhYZXzOQ}{10.76.116.123}{10.76.116.123:9300}{aws_availability_zone=us-east-2b, ml.machine_memory=16340688896, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-05-29T05:49:24,236][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [ip-172-18-5-140] publish_address {10.76.116.123:9200}, bound_addresses {10.76.116.123:9200}
[2018-05-29T05:49:24,237][INFO ][o.e.n.Node               ] [ip-172-18-5-140] started
[2018-05-29T05:49:24,930][INFO ][o.e.l.LicenseService     ] [ip-172-18-5-140] license [4787ce21-1db0-4d65-bb9a-94e83f67f09e] mode [trial] - valid
[2018-05-29T05:49:24,942][INFO ][o.e.g.GatewayService     ] [ip-172-18-5-140] recovered [8] indices into cluster_state
[2018-05-29T05:49:25,598][INFO ][o.e.c.r.a.AllocationService] [ip-172-18-5-140] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-alerts-6][0], [.monitoring-es-6-2018.05.29][0], [.watcher-history-7-2018.05.29][0]] ...]).

If you have 2 master-eligible nodes in the cluster, `minimum_master_nodes` should be set to 2, not 1.
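That also explains the log line `zen-disco-elected-as-master ([0] nodes joined)`: with `minimum_master_nodes: 1`, each node can form a quorum on its own and elect itself master without ever contacting the other node, which risks a split brain. The usual rule for Zen discovery is a strict majority of master-eligible nodes, i.e. `(master_eligible / 2) + 1`. A quick sketch of that calculation (illustrative helper, not part of Elasticsearch itself):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Quorum for Zen discovery: more than half of the master-eligible nodes."""
    return master_eligible // 2 + 1

# With 2 master-eligible nodes, the quorum is 2 -- both must be reachable
# before either can be elected master.
print(minimum_master_nodes(2))  # -> 2
print(minimum_master_nodes(3))  # -> 2
```

Note that with exactly 2 master-eligible nodes the cluster cannot tolerate losing either one, which is why 3 master-eligible nodes is the commonly recommended minimum.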

Wow, wow. Thank you for catching that! All good now!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.