Node seems not to be discovered on EC2 network

Hi all. I'm trying to form a two-node cluster on AWS EC2. I'm using Elasticsearch 7.3, and the "discovery-ec2" plugin is installed on both nodes.
The following are the elasticsearch.yml files for node-1 and node-2.

node-1

    # -- Cluster --
    cluster.name: Fess
    # -- Node --
    node.name: node-1
    # -- Path --
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    # -- Network --
    network.host: _ec2_
    http.port: 9200
    # -- Discovery --
    cluster.initial_master_nodes:
      - node-1
      - node-2
    discovery.seed_providers: ec2
    discovery.ec2.availability_zones:
      - us-west-1b
    discovery.ec2.tag.ES_CLUSTER: "YES"

node-2

    # -- Cluster --
    cluster.name: Fess
    # -- Node --
    node.name: node-2
    # -- Path --
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    # -- Network --
    network.host: _ec2_
    http.port: 9200
    # -- Discovery --
    cluster.initial_master_nodes:
      - node-1
      - node-2
    discovery.seed_providers: ec2
    discovery.ec2.availability_zones:
      - us-west-1b
    discovery.ec2.tag.ES_CLUSTER: "YES"

These two nodes are in the same EC2 availability zone, and both are assigned the tag "ES_CLUSTER" (value: YES).

I created a new user for managing the ES cluster in the IAM console and attached the following permission policy to it.

    {
        "Statement": [
            {
                "Action": [
                    "ec2:DescribeInstances"
                ],
                "Effect": "Allow",
                "Resource": [
                    "*"
                ]
            }
        ],
        "Version": "2012-10-17"
    }

I then added the new user's security credentials to the keystore with the commands below ($ indicates the Elasticsearch root directory).

    $bin/elasticsearch-keystore create
    $bin/elasticsearch-keystore add discovery.ec2.access_key
    $bin/elasticsearch-keystore add discovery.ec2.secret_key
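As a sanity check (a sketch, assuming the AWS CLI is installed and configured with the same IAM user's access key and secret), the tag filter the plugin will apply can be reproduced like this:

    # List the private IPs of the us-west-1 instances tagged ES_CLUSTER=YES;
    # both nodes should appear here, otherwise discovery-ec2 will not find them either.
    aws ec2 describe-instances \
        --region us-west-1 \
        --filters "Name=tag:ES_CLUSTER,Values=YES" \
        --query "Reservations[].Instances[].PrivateIpAddress"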

I started the Elasticsearch daemons in order, node-1 first and then node-2, as sketched below. The discovery-ec2 plugin does not seem to be used for forming the cluster. Is my configuration wrong, or is something missing?
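The start commands were roughly the following (assuming the RPM/DEB package install managed by systemd that the /etc/elasticsearch paths in the logs suggest):

    # Start the service on node-1 first, then on node-2, and follow the startup logs.
    sudo systemctl start elasticsearch
    sudo journalctl -u elasticsearch -f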

node-1

[2020-06-08T14:00:54,483][INFO ][o.e.p.PluginsService     ] [node-1] loaded plugin [discovery-ec2]
[2020-06-08T14:00:54,483][INFO ][o.e.p.PluginsService     ] [node-1] loaded plugin [minhash]
[2020-06-08T14:00:58,314][INFO ][o.e.x.s.a.s.FileRolesStore] [node-1] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2020-06-08T14:00:59,062][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-1] [controller/1585] [Main.cc@110] controller (64 bit): Version 7.3.0 (Build ff2f774f78ce63) Copyright (c) 2019 Elasticsearch BV
[2020-06-08T14:00:59,488][DEBUG][o.e.a.ActionModule       ] [node-1] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2020-06-08T14:00:59,854][INFO ][o.e.d.DiscoveryModule    ] [node-1] using discovery type [zen] and seed hosts providers [settings, ec2]
[2020-06-08T14:01:00,659][INFO ][o.e.n.Node               ] [node-1] initialized
[2020-06-08T14:01:00,660][INFO ][o.e.n.Node               ] [node-1] starting ...
[2020-06-08T14:01:00,800][INFO ][o.e.t.TransportService   ] [node-1] publish_address {172.31.18.180:9300}, bound_addresses {172.31.18.180:9300}
[2020-06-08T14:01:00,808][INFO ][o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-06-08T14:01:00,840][INFO ][o.e.c.c.Coordinator      ] [node-1] cluster UUID [oHiS4nwJTB-ZJcX0wurrjw]
[2020-06-08T14:01:00,847][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [node-1] no known master node, scheduling a retry
[2020-06-08T14:01:00,984][INFO ][o.e.c.s.MasterService    ] [node-1] elected-as-master ([1] nodes joined)[{node-1}{SFaapBMpQjOlZBBZl4D1qg}{-EIPhzQnR8yqIdgA72doDA}{172.31.18.180}{172.31.18.180:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 9, version: 53, reason: master node changed {previous [], current [{node-1}{SFaapBMpQjOlZBBZl4D1qg}{-EIPhzQnR8yqIdgA72doDA}{172.31.18.180}{172.31.18.180:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}]}
[2020-06-08T14:01:01,111][INFO ][o.e.c.s.ClusterApplierService] [node-1] master node changed {previous [], current [{node-1}{SFaapBMpQjOlZBBZl4D1qg}{-EIPhzQnR8yqIdgA72doDA}{172.31.18.180}{172.31.18.180:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}]}, term: 9, version: 53, reason: Publication{term=9, version=53}
[2020-06-08T14:01:01,237][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {172.31.18.180:9200}, bound_addresses {172.31.18.180:9200}
[2020-06-08T14:01:01,237][INFO ][o.e.n.Node               ] [node-1] started
[2020-06-08T14:01:01,841][INFO ][o.e.l.LicenseService     ] [node-1] license [f9a1e878-905f-45a3-8989-8e12bdbcee71] mode [basic] - valid
[2020-06-08T14:01:01,842][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [node-1] Active license is now [BASIC]; Security is disabled
[2020-06-08T14:01:01,852][INFO ][o.e.g.GatewayService     ] [node-1] recovered [1] indices into cluster_state
[2020-06-08T14:01:02,179][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.configsync][0]] ...]).
[2020-06-08T14:01:02,202][INFO ][o.c.e.c.s.ConfigSyncService] [node-1] ConfigFileUpdater is started at 1m intervals.

node-2

[2020-06-08T14:03:43,708][INFO ][o.e.p.PluginsService     ] [node-2] loaded plugin [discovery-ec2]
[2020-06-08T14:03:47,981][INFO ][o.e.x.s.a.s.FileRolesStore] [node-2] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2020-06-08T14:03:48,781][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-2] [controller/1328] [Main.cc@110] controller (64 bit): Version 7.3.0 (Build ff2f774f78ce63) Copyright (c) 2019 Elasticsearch BV
[2020-06-08T14:03:49,183][DEBUG][o.e.a.ActionModule       ] [node-2] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2020-06-08T14:03:49,427][INFO ][o.e.d.DiscoveryModule    ] [node-2] using discovery type [zen] and seed hosts providers [settings, ec2]
[2020-06-08T14:03:50,194][INFO ][o.e.n.Node               ] [node-2] initialized
[2020-06-08T14:03:50,194][INFO ][o.e.n.Node               ] [node-2] starting ...
[2020-06-08T14:03:50,323][INFO ][o.e.t.TransportService   ] [node-2] publish_address {172.31.27.164:9300}, bound_addresses {172.31.27.164:9300}
[2020-06-08T14:03:50,331][INFO ][o.e.b.BootstrapChecks    ] [node-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-06-08T14:04:00,351][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2020-06-08T14:04:10,353][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2020-06-08T14:04:20,355][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2020-06-08T14:04:20,367][WARN ][o.e.n.Node               ] [node-2] timed out while waiting for initial discovery state - timeout: 30s
[2020-06-08T14:04:20,376][INFO ][o.e.h.AbstractHttpServerTransport] [node-2] publish_address {172.31.27.164:9200}, bound_addresses {172.31.27.164:9200}
[2020-06-08T14:04:20,377][INFO ][o.e.n.Node               ] [node-2] started
[2020-06-08T14:04:30,357][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2020-06-08T14:04:40,358][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{node-2}{n4AT1IWMT--j2o9EToIjAw}{ro_IHM17R5uY6XZuR4FpOQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305442816, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2020-06-08T14:04:47,701][INFO ][o.e.n.Node               ] [node-2] stopping ...
[2020-06-08T14:04:47,707][INFO ][o.e.x.w.WatcherService   ] [node-2] stopping watch service, reason [shutdown initiated]
[2020-06-08T14:04:48,153][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-2] [controller/1328] [Main.cc@150] Ml controller exiting
[2020-06-08T14:04:48,154][INFO ][o.e.x.m.p.NativeController] [node-2] Native controller process has stopped - no new native processes can be started
[2020-06-08T14:04:48,162][INFO ][o.e.n.Node               ] [node-2] stopped
[2020-06-08T14:04:48,163][INFO ][o.e.n.Node               ] [node-2] closing ...
[2020-06-08T14:04:48,173][INFO ][o.e.n.Node               ] [node-2] closed

It looks like you are missing some discovery-plugin-specific settings in elasticsearch.yml on both nodes. For example, the plugin expects you to specify the endpoint setting, discovery.ec2.endpoint, which defaults to ec2.us-east-1.amazonaws.com; since your instances appear to be in the us-west-1 region, you will have to set it explicitly in the YAML on both nodes.

More settings for the EC2 discovery plugin are listed here: Using the EC2 discovery plugin | Elasticsearch Plugins and Integrations [8.11] | Elastic
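If the cluster still does not form after that, you could also raise the plugin's log level on both nodes to see exactly which addresses the EC2 provider returns, e.g. by adding something like this to elasticsearch.yml:

    # TRACE logging for the EC2 seed hosts provider; shows the instances it discovers (or why it finds none).
    logger.org.elasticsearch.discovery.ec2: TRACE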

Thank you for responding. I added the following line to elasticsearch.yml on both nodes.

    discovery.ec2.endpoint: ec2.us-west-1.amazonaws.com

However, node-2 still could not find node-1 (the expected master node).

[2020-06-09T12:52:04,595][INFO ][o.e.p.PluginsService     ] [node-2] loaded plugin [discovery-ec2]
[2020-06-09T12:52:08,284][INFO ][o.e.x.s.a.s.FileRolesStore] [node-2] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2020-06-09T12:52:09,133][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-2] [controller/1629] [Main.cc@110] controller (64 bit): Version 7.3.0 (Build ff2f774f78ce63) Copyright (c) 2019 Elasticsearch BV
[2020-06-09T12:52:09,522][DEBUG][o.e.a.ActionModule       ] [node-2] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2020-06-09T12:52:09,775][INFO ][o.e.d.DiscoveryModule    ] [node-2] using discovery type [zen] and seed hosts providers [settings, ec2]
[2020-06-09T12:52:10,552][INFO ][o.e.n.Node               ] [node-2] initialized
[2020-06-09T12:52:10,553][INFO ][o.e.n.Node               ] [node-2] starting ...
[2020-06-09T12:52:10,680][INFO ][o.e.t.TransportService   ] [node-2] publish_address {172.31.27.164:9300}, bound_addresses {172.31.27.164:9300}
[2020-06-09T12:52:10,686][INFO ][o.e.b.BootstrapChecks    ] [node-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-06-09T12:52:20,701][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-2}{HtnpnNcSQ66znAHgJWB43A}{9Fikl2jcTDmVE8zKEdLUbQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305434624, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{node-2}{HtnpnNcSQ66znAHgJWB43A}{9Fikl2jcTDmVE8zKEdLUbQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305434624, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2020-06-09T12:52:30,703][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-2}{HtnpnNcSQ66znAHgJWB43A}{9Fikl2jcTDmVE8zKEdLUbQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305434624, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{node-2}{HtnpnNcSQ66znAHgJWB43A}{9Fikl2jcTDmVE8zKEdLUbQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305434624, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2020-06-09T12:52:40,705][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-2}{HtnpnNcSQ66znAHgJWB43A}{9Fikl2jcTDmVE8zKEdLUbQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305434624, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{node-2}{HtnpnNcSQ66znAHgJWB43A}{9Fikl2jcTDmVE8zKEdLUbQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305434624, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2020-06-09T12:52:40,726][WARN ][o.e.n.Node               ] [node-2] timed out while waiting for initial discovery state - timeout: 30s
[2020-06-09T12:52:40,738][INFO ][o.e.h.AbstractHttpServerTransport] [node-2] publish_address {172.31.27.164:9200}, bound_addresses {172.31.27.164:9200}
[2020-06-09T12:52:40,739][INFO ][o.e.n.Node               ] [node-2] started
[2020-06-09T12:52:50,707][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-2] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-2}{HtnpnNcSQ66znAHgJWB43A}{9Fikl2jcTDmVE8zKEdLUbQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305434624, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{node-2}{HtnpnNcSQ66znAHgJWB43A}{9Fikl2jcTDmVE8zKEdLUbQ}{172.31.27.164}{172.31.27.164:9300}{dim}{ml.machine_memory=16305434624, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0

Finally, I solved this problem. There were two issues in my case:

  • Lack of "discovery.ec2.endpoint"
  • The reserved word "YES"

Lack of "discovery.ec2.endpoint
This was noted by @Rahul_Kumar4 . Thank you :grinning:
I checked the official document but I cannot find that this is necessary.

The reserved word "YES"
This issue was my fault. YES is a reserved word for a boolean value in YAML, so it is treated as true when the YAML file is parsed.
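A quick way to see this behaviour (a sketch, assuming Python 3 with PyYAML is available; the YAML parser Elasticsearch uses may differ in detail):

    # YAML 1.1 resolves the unquoted scalars YES/Yes/yes (and NO/ON/OFF) to booleans.
    python3 -c 'import yaml; print(yaml.safe_load("ES_CLUSTER: YES"))'
    # prints: {'ES_CLUSTER': True}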

    discovery.ec2.tag.ES_CLUSTER: "YES"

I replaced the setting above with the following line on both node-1 and node-2.

    discovery.ec2.tag.ES_CLUSTER: 1

I also changed the value of the "ES_CLUSTER" tag to "1" in the EC2 console. The cluster then formed correctly.
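The same tag change can also be made with the AWS CLI (the instance IDs below are placeholders, not the real ones):

    # Re-tag both instances with ES_CLUSTER=1 (hypothetical instance IDs).
    aws ec2 create-tags \
        --region us-west-1 \
        --resources i-0123456789abcdef0 i-0fedcba9876543210 \
        --tags Key=ES_CLUSTER,Value=1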

The final elasticsearch.yml files are shown below.
node-1

    # -- Cluster --
    cluster.name: Fess
    # -- Node --
    node.name: node-1
    # -- Path --
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    # -- Network --
    network.host: _ec2_
    http.port: 9200
    # -- Discovery --
    cluster.initial_master_nodes:
      - node-1
      - node-2
    discovery.seed_providers: ec2
    discovery.ec2.endpoint: ec2.us-west-1.amazonaws.com
    discovery.ec2.tag.ES_CLUSTER: 1

node-2

    # -- Cluster --
    cluster.name: Fess
    # -- Node --
    node.name: node-2
    # -- Path --
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    # -- Network --
    network.host: _ec2_
    http.port: 9200
    # -- Discovery --
    cluster.initial_master_nodes:
      - node-1
      - node-2
    discovery.seed_providers: ec2
    discovery.ec2.endpoint: ec2.us-west-1.amazonaws.com
    discovery.ec2.tag.ES_CLUSTER: 1
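To confirm that the cluster formed, both nodes should appear in the _cat/nodes output (using node-1's private address from the logs above):

    # node-1 and node-2 should both be listed, with the elected master marked by '*'.
    curl 'http://172.31.18.180:9200/_cat/nodes?v'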
