Can't connect to remote cluster

Node 1: ES 7.14.0 (remote_cluster_client role is added to yml file)
Node 2: ES 7.13.2

I've added my remote cluster via API:

PUT /_cluster/settings
{
  "persistent" : {
    "cluster" : {
      "remote" : {
        "cluster_one" : {
          "seeds" : [
            "xx.xx.xx.xx:9300"
          ]
        }
      }
    }
  }
}
When checking the connection (via GET /_remote/info) I get:

  "cluster_one" : {
    "connected" : false,
    "mode" : "sniff",
    "seeds" : [
    "num_nodes_connected" : 0,
    "max_connections_per_cluster" : 3,
    "initial_connect_timeout" : "30s",
    "skip_unavailable" : false

I verified firewall and connectivity between the clusters and everything is fine. Any idea where I could see some more detailed logs to debug this?

In the second cluster's config I don't see any transport.port line, so I guess it uses the default 9300.
I also see: ["…", "[::1]"]

Should I also bind to the IP of the first cluster's host?

If you set network.host, it overrides all those other network settings and applies to both the transport and HTTP ports.
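For reference, a minimal sketch of what that looks like in elasticsearch.yml (the address below is a placeholder for the node's own IP):

```yaml
# elasticsearch.yml -- sketch; replace 10.0.0.2 with this node's address.
# network.host sets the bind/publish address for BOTH the HTTP (9200)
# and transport (9300) layers, overriding the more specific settings.
network.host: 10.0.0.2

# The transport port defaults to 9300; only set it if you need a non-default.
# transport.port: 9300
```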

What do the Elasticsearch logs show?

And no... you are not trying to join the two clusters into one cluster...

Ok I commented out all that and only kept: [""]

From cluster 1 I try curl -XGET xx.xx.xx.xx:9200
I get 200 and the cluster status message, no issue.

Then I try curl -XGET xx.xx.xx.xx:9300

curl: (1) Received HTTP/0.9 when not allowed

Trying the same from cluster 2 with curl -XGET gives the same responses, so there is obviously no connectivity issue.
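Worth noting: the HTTP/0.9 error is actually expected here. Port 9300 speaks Elasticsearch's binary transport protocol, not HTTP, so curl getting far enough to complain about the reply means the TCP connection itself succeeded. A plain TCP check avoids the confusing message (sketch; xx.xx.xx.xx is the placeholder remote address):

```shell
# 9300 is the binary transport port, not HTTP -- don't expect an HTTP reply.
# -z: only test that the port accepts connections; -v: report the result.
nc -vz xx.xx.xx.xx 9300
```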

I've tried tail -f elasticsearch.log, but there are no entries at all when I either call via curl or activate the remote cluster sync on cluster 1.

netstat -plnt on cluster 2 returns the following, so it is listening on the right transport port:
tcp6 0 0 :::9300 :::* LISTEN 1939346/java

There is absolutely nothing in the logs (/var/log/elasticsearch.log for 7.13.2, /home/ubuntu/elasticsearch-7.14.0/logs/my-cluster-prod.log).
I monitored the logs while sending the API call to register the remote cluster, and while changing settings from the Kibana remote clusters section: nothing appears in the logs of either machine.

Nothing in syslog on either machine, either.
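One way to get more detail in this situation (an assumption on my part that it applies to your case) is to raise the transport logger's level via `PUT /_cluster/settings` with a body like:

```json
{
  "transient": {
    "logger.org.elasticsearch.transport": "debug"
  }
}
```

Set it back to null when done, as debug logging on the transport layer is noisy.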

OK, I finally found something in the logs:

[2022-07-27T10:19:51,798][WARN ][o.e.t.TcpTransport ] [elasticsearch-node-1] SSL/TLS request received but SSL/TLS is not enabled on this node, got (16,3,3,1), [Netty4TcpChannel{localAddress=/xxxx:9300, remoteAddress=/xxxxx:58478, profile=default}], closing connection

Cluster 1 has TLS enabled but not cluster 2
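For cluster 2, enabling TLS on the transport layer comes down to settings like the following in elasticsearch.yml (a sketch; the keystore path is an example, not your actual path):

```yaml
# Enable TLS on the transport layer (node-to-node and cross-cluster traffic).
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
# PKCS#12 keystore holding this node's certificate and key (example path).
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
```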

I've enabled ssl/tls on the other node but now I get:

[master-1] failed to establish trust with server at [<unknown host>]; the server provided a certificate with subject name [CN=instance] and fingerprint [xxx]; the certificate does not have any subject alternative names; the certificate is issued by [CN=Elastic Certificate Tool Autogenerated CA]; the certificate is signed by (subject [CN=Elastic Certificate Tool Autogenerated CA] fingerprint [yyyyy]) which is self-issued; the [CN=Elastic Certificate Tool Autogenerated CA] certificate is not trusted in this ssl context ([]); this ssl context does trust a certificate with subject [CN=Elastic Certificate Tool Autogenerated CA] but the trusted certificate has fingerprint [zzzz] PKIX path validation failed: Path does not chain with any of the trust anchors

Could it be that I generated the certificates the wrong way?
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /etc/elasticsearch/certs/elk-cluster-ca.p12 --days 3650

I could regenerate the CA and certs, but what should I put in instances.yml and in the hosts file for each machine?
Or should I use the same certs on both machines?
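For what it's worth, instances.yml lists each node with the DNS names and IPs that other nodes will use to reach it, and those become the subject alternative names in the generated certs; a sketch (node names, hostnames, and IPs below are hypothetical):

```yaml
# instances.yml -- one entry per node; dns/ip entries become SANs in the cert.
instances:
  - name: node-1
    dns: ["node1.example.com"]
    ip: ["10.0.0.1"]
  - name: node-2
    dns: ["node2.example.com"]
    ip: ["10.0.0.2"]
```

Then something like `bin/elasticsearch-certutil cert --ca elk-cluster-ca.p12 --in instances.yml --out certs.zip` produces one keystore per instance. You don't need to reuse the same certs on both machines; each cluster can keep its own CA as long as each side trusts the other's CA.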

I don't really understand how to do that:

  • Adding the CA certificate from the local cluster as a trusted CA in each remote cluster (see Transport TLS settings ).
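Concretely, that step means exporting the local cluster's CA certificate in PEM form and pointing the remote cluster's transport SSL config at it; a sketch, assuming a path like the one below:

```yaml
# elasticsearch.yml on the remote cluster -- trust the local cluster's CA
# in addition to whatever this cluster already trusts (example path).
xpack.security.transport.ssl.certificate_authorities:
  - /etc/elasticsearch/certs/local-cluster-ca.crt
```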

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.