Not able to form a 2-node cluster with SSL, version 7.2.0

Hi All,

I'm trying to create a 2-node cluster with SSL. Both nodes run fine individually, but they are not able to form a cluster. Below are the details:

Node-1 Config :
http.port: 9200
node.name: node-1
cluster.name: dev_env
node.data: true
node.master: true
network.host:
network.publish_host:
transport.tcp.port: 9300
transport.publish_port: 9300
discovery.seed_hosts: ["node-1:9300","node-2:9301"]
path.data: /opt/mad/tools/es_data
path.logs: /opt/mad/tools/es_logs
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/node-1.key
xpack.security.http.ssl.certificate: certs/node-1.crt
xpack.security.http.ssl.certificate_authorities: certs/ca/ca.crt
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs/node-1.key
xpack.security.transport.ssl.certificate: certs/node-1.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca/ca.crt

Node-2 Config :
http.port: 9201
node.name: node-2
cluster.name: dev_env
node.data: true
node.master: true
network.host:
network.publish_host:
transport.tcp.port: 9301
transport.publish_port: 9301
discovery.seed_hosts: ["node-1:9300","node-2:9301"]
path.data: /opt/mad/tools/es_data
path.logs: /opt/mad/tools/es_logs
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/node-2.key
xpack.security.http.ssl.certificate: certs/node-2.crt
xpack.security.http.ssl.certificate_authorities: certs/ca/ca.crt
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs/node-2.key
xpack.security.transport.ssl.certificate: certs/node-2.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca/ca.crt

Note: I first tried generating different certs for the two nodes, and the cluster did not form. I then tried using the same certs on both nodes, and it still did not form.

It is not throwing any errors either. Please help me with this. Thanks!

  • If this is not throwing any errors, then how do you know that the cluster is not forming?
  • What are the contents of the Elasticsearch logs?
  • How did you install Elasticsearch?
  • How did you generate the keys and certificates?

Thanks for the reply @ikakavas.

The two nodes are working fine individually, but when I add an index on the first node, the same index is not replicated on the second node. That is how I concluded that the cluster has not formed.

I generated the certs by following this blog post: https://www.elastic.co/blog/configuring-ssl-tls-and-https-to-secure-elasticsearch-kibana-beats-and-logstash

I downloaded the archive from the Elastic website and started each node with the ./elasticsearch command.

For both nodes, the logs end like this:

[2019-08-28T13:05:16,676][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[projectsummary][0], [coi_deals][0], [searchapi][0]] ...]).

Please share larger parts of your logs; we can't understand much from just a single line.
I don't think this has to do with the TLS settings at all (at least not so far).

See https://www.elastic.co/guide/en/elasticsearch/reference/current/discovery-settings.html. Are node-1 and node-2 resolvable hostnames locally? If you don't set discovery.seed_hosts yourself, Elasticsearch will search for nodes on ports 9300-9305 anyway, so you might as well comment it out.
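
For reference, a minimal sketch of how that setting maps onto this setup, assuming node-1 and node-2 resolve to the address each node publishes for its transport port (the explicit :9301 is only needed because node-2 listens on a non-default transport port):

discovery.seed_hosts: ["node-1:9300", "node-2:9301"]   # each entry: a resolvable hostname plus that node's transport port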

The cluster.initial_master_nodes setting is missing from the configs quoted above. If they're running ok individually then I'm guessing they auto-bootstrapped in the past and have formed separate clusters. See this note for more information and a resolution.
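
For anyone reading along, a sketch of what that bootstrap setting could look like here, assuming the node.name values from the configs quoted above. It is only consulted the very first time a brand-new cluster starts, so it has no effect on nodes that have already bootstrapped, and it should be removed once the cluster has formed:

# Values must match each node's node.name; remove this line after the cluster has formed
cluster.initial_master_nodes: ["node-1", "node-2"]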

Thanks @DavidTurner and @ikakavas.

@DavidTurner: Initially those two were running as two separate clusters. Now I'm trying to combine them into a single cluster, but it's not happening. I added cluster.initial_master_nodes on node-1, but they still behave as different clusters.
Q1: Do I need to delete all the data on the nodes and start fresh? Please help me on how to proceed further. Thanks!

@ikakavas: node-1 and node-2 are resolvable hostnames. I tried hardcoding the hostnames as well as the IP addresses, but they still behave as two separate clusters.

Please find the logs below:
Node-1 logs

[2019-08-29T06:38:27,161][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-1] [controller/21946] [Main.cc@110] controller (64 bit): Version 7.2.0 (Build 65aefcbfce449b) Copyright (c) 2019 Elasticsearch BV
[2019-08-29T06:38:27,333][DEBUG][o.e.a.ActionModule       ] [node-1] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-08-29T06:38:27,638][INFO ][o.e.d.DiscoveryModule    ] [node-1] using discovery type [zen] and seed hosts providers [settings]
[2019-08-29T06:38:28,223][INFO ][o.e.n.Node               ] [node-1] initialized
[2019-08-29T06:38:28,224][INFO ][o.e.n.Node               ] [node-1] starting ...
[2019-08-29T06:38:28,324][INFO ][o.e.t.TransportService   ] [node-1] publish_address {192.168.0.13:9300}, bound_addresses {192.168.0.13:9300}
[2019-08-29T06:38:28,329][INFO ][o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-08-29T06:38:28,335][INFO ][o.e.c.c.Coordinator      ] [node-1] cluster UUID [AIO0MnzwT6aZVzz_OlKH_Q]
[2019-08-29T06:38:28,476][INFO ][o.e.c.s.MasterService    ] [node-1] elected-as-master ([1] nodes joined)[{node-1}{V1hP1oonRwW6AN9NSyUXZw}{IXQh_D2URRWuFlQ6panGYw}{cisco.elastic.23.dev}{192.168.0.13:9300}{ml.machine_memory=33567051776, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 39, version: 488, reason: master node changed {previous [], current [{node-1}{V1hP1oonRwW6AN9NSyUXZw}{IXQh_D2URRWuFlQ6panGYw}{cisco.elastic.23.dev}{192.168.0.13:9300}{ml.machine_memory=33567051776, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-08-29T06:38:28,659][INFO ][o.e.c.s.ClusterApplierService] [node-1] master node changed {previous [], current [{node-1}{V1hP1oonRwW6AN9NSyUXZw}{IXQh_D2URRWuFlQ6panGYw}{cisco.elastic.23.dev}{192.168.0.13:9300}{ml.machine_memory=33567051776, xpack.installed=true, ml.max_open_jobs=20}]}, term: 39, version: 488, reason: Publication{term=39, version=488}
[2019-08-29T06:38:28,709][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {192.168.0.13:9200}, bound_addresses {192.168.0.13:9200}
[2019-08-29T06:38:28,709][INFO ][o.e.n.Node               ] [node-1] started
[2019-08-29T06:38:29,172][INFO ][o.e.l.LicenseService     ] [node-1] license [eaeb2d98-1d97-4116-97bf-1f3fc8ec8ebe] mode [trial] - valid
[2019-08-29T06:38:29,179][INFO ][o.e.g.GatewayService     ] [node-1] recovered [12] indices into cluster_state
[2019-08-29T06:38:30,076][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[coi_deals][0], [projectsummary][0]] ...]).

Node-2 Logs

[2019-08-29T06:38:20,435][INFO ][o.e.x.s.a.s.FileRolesStore] [node-2] parsed [0] roles from file [/opt/ma/tools/elasticsearch-7.2.0/config/roles.yml]
[2019-08-29T06:38:20,887][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-2] [controller/29864] [Main.cc@110] controller (64 bit): Version 7.2.0 (Build 65aefcbfce449b) Copyright (c) 2019 Elasticsearch BV
[2019-08-29T06:38:21,191][DEBUG][o.e.a.ActionModule       ] [node-2] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-08-29T06:38:21,543][INFO ][o.e.d.DiscoveryModule    ] [node-2] using discovery type [zen] and seed hosts providers [settings]
[2019-08-29T06:38:22,131][INFO ][o.e.n.Node               ] [node-2] initialized
[2019-08-29T06:38:22,131][INFO ][o.e.n.Node               ] [node-2] starting ...
[2019-08-29T06:38:22,263][INFO ][o.e.t.TransportService   ] [node-2] publish_address {192.168.0.7:9301}, bound_addresses {192.168.0.7:9301}
[2019-08-29T06:38:22,272][INFO ][o.e.b.BootstrapChecks    ] [node-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-08-29T06:38:22,282][INFO ][o.e.c.c.Coordinator      ] [node-2] cluster UUID [YPf57lqgT5a5QyL1MyUwjg]
[2019-08-29T06:38:22,428][INFO ][o.e.c.s.MasterService    ] [node-2] elected-as-master ([1] nodes joined)[{node-2}{PfvwFl5TRs6cUTI3Gk4IsQ}{Stc8tZ9eQmSbRsvriM28WQ}{cisco.elastic.07.dev}{192.168.0.7:9301}{ml.machine_memory=33567051776, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 32, version: 281, reason: master node changed {previous [], current [{node-2}{PfvwFl5TRs6cUTI3Gk4IsQ}{Stc8tZ9eQmSbRsvriM28WQ}{cisco.elastic.07.dev}{192.168.0.7:9301}{ml.machine_memory=33567051776, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-08-29T06:38:22,576][INFO ][o.e.c.s.ClusterApplierService] [node-2] master node changed {previous [], current [{node-2}{PfvwFl5TRs6cUTI3Gk4IsQ}{Stc8tZ9eQmSbRsvriM28WQ}{cisco.elastic.07.dev}{192.168.0.7:9301}{ml.machine_memory=33567051776, xpack.installed=true, ml.max_open_jobs=20}]}, term: 32, version: 281, reason: Publication{term=32, version=281}
[2019-08-29T06:38:22,665][INFO ][o.e.h.AbstractHttpServerTransport] [node-2] publish_address {192.168.0.7:9201}, bound_addresses {192.168.0.7:9201}
[2019-08-29T06:38:22,666][INFO ][o.e.n.Node               ] [node-2] started
[2019-08-29T06:38:22,878][INFO ][o.e.l.LicenseService     ] [node-2] license [12211812-689a-450c-b004-cfa52747e6d8] mode [basic] - valid
[2019-08-29T06:38:22,886][INFO ][o.e.g.GatewayService     ] [node-2] recovered [8] indices into cluster_state
[2019-08-29T06:38:23,739][INFO ][o.e.c.r.a.AllocationService] [node-2] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[coi_deals][0]] ...]).

Please help me on this. Thanks!

Yes, the docs I linked above answer exactly this question:

... there is no way to merge these separate clusters together without a risk of data loss ... If you intended to form a single cluster then you should start again ...

@DavidTurner: There is not much data in the clusters. I'm OK with the data loss for now.

Q1: Is there any way to combine the clusters without starting fresh again? I'm OK with data loss.

Q2: If I start fresh again, do I need to set cluster.initial_master_nodes: ["node-1"] on both nodes or on only one of them?

Thanks!!

Yes, this too is covered in the docs I linked: take snapshots of them both first.
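
In case it helps with the snapshot step: for a shared-filesystem repository, the only elasticsearch.yml piece is whitelisting the repository location with path.repo; registering the repository and taking the snapshot are then done through the snapshot REST API. The path below is just a placeholder, not taken from the configs above:

path.repo: ["/opt/mad/tools/es_backups"]   # placeholder directory; must exist, be writable by the Elasticsearch user, and be reachable from every node for an "fs" repository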

This is also covered in the page of docs I linked, further up:

WARNING: You must set cluster.initial_master_nodes to the same list of nodes on each node on which it is set in order to be sure that only a single cluster forms during bootstrapping and therefore to avoid the risk of data loss.
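
Applied to the configs above, that would mean an identical line in both node-1's and node-2's elasticsearch.yml, for example:

# In BOTH elasticsearch.yml files, exactly the same list:
cluster.initial_master_nodes: ["node-1", "node-2"]

Setting only ["node-1"] would also satisfy that warning, as long as every node that sets the option uses that same list.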

Thanks @DavidTurner. I will try it and get back to you if I face any issues.
