Centralized Kibana to fetch Logstash data from multiple DCs

Hi everyone,

I have 3 data centers, and each one has its own Kibana, Logstash, and ES cluster with 3 ES nodes. Each DC has its own CA and certificates, which are distributed to all of its ES nodes.

Our goal is a centralized Kibana that can access the Logstash data from the above 3 data centers, instead of managing them separately.

To achieve this, my plan is to use remote clusters with cross-cluster search, and to open port 9301 bidirectionally from all 3 DCs to the centralized server.
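For reference, once the remote clusters are set up, my understanding is that a cross-cluster search from the centralized Kibana would look something like this (the cluster alias and index pattern here are just placeholders):

GET dc1-remote:logstash-*/_search
{
  "query": {
    "match_all": {}
  }
}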

To get started, I have a few questions:

  1. On the centralized server, do we need to install Kibana, Logstash, and Elasticsearch?

  2. To use CCS with a remote cluster in each of those DCs, do we need to have the same CA and certs across all the DCs?

  3. If yes to 1, do we need to copy the same CA and certs from each DC to the centralized server?

Please advise.

Could someone please help me out?

So I now have Kibana and Elasticsearch running on my centralized server. My networking team has opened port 9301 bidirectionally between the centralized Kibana server and one of the DC Logstash servers. But when I try to create a remote cluster with the DC Logstash server IP as the seed node on port 9301, I get the connection exception error below. Is there anywhere else we need to specify port 9301 to connect to the DC Logstash server? Could you please advise how to fix this issue?

[2023-09-25T18:05:58,373][INFO ][o.e.c.s.ClusterSettings  ] [ES-centrKibana-node-1] updating [cluster.remote.EU-remote-cluster-elast.mode] from [SNIFF] to [sniff]
[2023-09-25T18:05:58,373][INFO ][o.e.c.s.ClusterSettings  ] [ES-centrKibana-node-1] updating [cluster.remote.EU-remote-cluster-elast.skip_unavailable] from [false] to [true]
[2023-09-25T18:06:32,252][INFO ][o.e.c.s.ClusterSettings  ] [ES-centrKibana-node-1] updating [cluster.remote.EU-remote-cluster.seeds] from [[]] to [["50.22.343.200:9301","50.22.343.201:9301"]]
[2023-09-25T18:06:34,260][WARN ][o.e.t.SniffConnectionStrategy] [ES-centrKibana-node-1] fetching nodes from external cluster [EU-remote-cluster-elast] failed
org.elasticsearch.transport.ConnectTransportException: [][50.22.343.202:9300] connect_exception
	at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1119) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:115) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.BaseFuture.setException(BaseFuture.java:149) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.onFailure(ListenableFuture.java:147) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addListener$0(Netty4TcpChannel.java:62) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
	at java.lang.Thread.run(Thread.java:1589) ~[?:?]
Caused by: org.elasticsearch.common.util.concurrent.UncategorizedExecutionException: Failed execution
	at org.elasticsearch.common.util.concurrent.FutureUtils.rethrowExecutionException(FutureUtils.java:80) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:72) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:112) ~[elasticsearch-8.6.2.jar:?]
	... 20 more
Caused by: java.util.concurrent.ExecutionException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection timed out: no further information: /50.22.343.202:9300
	at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.getValue(BaseFuture.java:257) ~[elasticsearch-8.6.2.jar:?]

First, what version is each cluster?

Can you show the configuration, not just the results?

Why port 9301... not the normal 9300?

And I assume you read this ... yes, all clusters need to trust each other with respect to TLS.

Can you telnet to the remote IP and port from your central server?

Thanks Stephen for your response.

  • Version -> 8.6.2
  • I tried with 9300 as well. Same connection issue.
  • For the trust-each-other task -> As I am testing with one DC, I have copied the CA used in that DC and created the certificates for Elasticsearch and Kibana on the centralized server.
  • If the port is open, I am able to telnet to the remote IP and port from the central server.

And please let me know which configuration you want to take a look at?

So

  • Version -> 8.6.2

OK

  • I tried with 9300 as well. Same connection issue.

You should know which port the transport is running on. 9300 is the default; if you did not change it, then that is what it should be.
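If someone had changed it, you would see it explicitly in elasticsearch.yml, something like this (just illustrating the setting, I am not saying it is set on your nodes):

# only present if the default transport port was overridden
transport.port: 9301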

  • For the trust-each-other task -> As I am testing with one DC, I have copied the CA used in that DC and created the certificates for Elasticsearch and Kibana on the centralized server.

Per the docs, this means they need each other's CA. Did you do that?

All connected clusters must trust one another and be mutually authenticated with TLS on the transport interface. This means that the local cluster trusts the certificate authority (CA) of the remote cluster, and the remote cluster trusts the CA of the local cluster. When establishing a connection, all nodes will verify certificates from nodes on the other side. This mutual trust is required to securely connect a remote cluster, because all connected nodes effectively form a single security domain.

  • And please let me know which configuration you want to take a look at?

You will need to share the complete elasticsearch.yml for both the local and remote clusters.

  • if the port is open, I am able to telnet to the remote IP and port from central server.

Not sure if that is a statement or a question

Let me try to answer your questions one by one.

Q) You should know which port the transport is running on. 9300 is the default; if you did not change it, then that is what it should be.

I did not change the port to 9301 anywhere. Is there any place to add or update the port for the remote cluster to work? The only thing I asked the network team was to open 9301 for the remote cluster connection to the central server. And that port is opened bidirectionally.

Q) "This means that the local cluster trusts the certificate authority (CA) of the remote cluster, and the remote cluster trusts the CA of the local cluster."

Can you please advise what steps I need to take to get the remote cluster to trust the CA of the local cluster? What I have done so far is copy the CA used in that DC (remote cluster) to the central server and create the certificates for Elasticsearch and Kibana on the centralized server.

Q) The last one, regarding "if the port is open, I am able to telnet to the remote IP and port from the central server."

Since there was nothing listening on either side, I tested with PortListener - a lightweight utility that listens on 9301. That is what I meant by the port being open.

Why did you ask for firewall port 9301? What led you to that? Did you run netstat or something... I am not clear why you think the Elasticsearch transport is running on 9301.... it can be, but only if you set it... or if something else is running on 9300.

So how did you come up with 9301 on both local and remote?

You can see in the error that the connection attempt is to 50.22.343.202:9300.
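You can also check what is actually listening on the remote host, for example something like this (syntax depends on the OS; this is the Windows form):

netstat -ano | findstr :9300
netstat -ano | findstr :9301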

You can run the following in Kibana Dev Tools

GET _cat/nodes?h=id,po,ip,http&v on each cluster; the po column is the transport port. For example:

GET _cat/nodes?h=id,po,ip,http&v
# Result
id   po   ip         http
Wkt7 9300 172.19.0.3 172.19.0.3:9200

And most important, share both the remote and local elasticsearch.yml.

The entire file; you can anonymize passwords, etc.

Looks like I misunderstood when the documentation said the range 9300-9400 for the transport. GET _cat/nodes?h=id,po,ip,http&v gives me the po as 9300 on both local and remote, so I will use 9300 as the port number going forward. But as I mentioned earlier, I already tried with 9300 and got the same connection exception error.

Below is the elasticsearch.yml for the local (central) server.

 
ingest.geoip.downloader.enabled : false

action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,*
 
cluster.name: ES-central-kibana
 
node.name:  ES-centrKibana-node-1
 
path:
  data:
    - "D:\\ElasticSearch\\Database\\elasticsearch"
 
path.logs: "D:\\ElasticSearch\\Logs"
 
 
network.host: 50.22.343.199
 
http.port: 9200
 
cluster.initial_master_nodes: ["ES-centrKibana-node-1"]

 
xpack.security.enabled: true
 
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: C:\elastic\elasticsearch-8.6.2\config\certs\elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: C:\elastic\elasticsearch-8.6.2\config\certs\elastic-certificates.p12

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: C:\elastic\elasticsearch-8.6.2\config\certs\http.p12
 

And below is the remote elasticsearch.yml.
FYI: the remote ES cluster is made up of 3 separate nodes joined into a cluster; it is not on the remote Logstash/Kibana server.

 
ingest.geoip.downloader.enabled : false

action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,*
 
 
cluster.name: remoter.cluster.prod
 
node.name: remote-Node-01
  
path:
  data:
    - "F:\\ElasticSearch\\Database\\elasticsearch"
 
path.logs: "D:\\ElasticSearch\\Logs"
 
# Lock the memory on startup:
#
bootstrap.memory_lock: true
# 
network.host: 50.22.343.216
 
http.port: 9200
 
discovery.seed_hosts: ["50.22.343.216", "50.22.343.217", "50.22.343.218"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["remote-Node-01"]
  
xpack.security.enabled: true
 
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: C:\elastic\elasticsearch-8.6.2\config\certs\elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: C:\elastic\elasticsearch-8.6.2\config\certs\elastic-certificates.p12

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: C:\elastic\elasticsearch-8.6.2\config\certs\http.p12

xpack.notification.email:
  default_account: smtp_account
xpack.notification.email.account:
    smtp_account:
        profile: standard
        smtp:
            auth: false
            host: SMTP1.####.com
            port: 25

OK, progress, but I am pretty sure there is a fundamental issue unless I am missing something.

You are missing this part from here

All connected clusters must trust one another and be mutually authenticated with TLS on the transport interface. This means that the local cluster trusts the certificate authority (CA) of the remote cluster, and the remote cluster trusts the CA of the local cluster. When establishing a connection, all nodes will verify certificates from nodes on the other side. This mutual trust is required to securely connect a remote cluster, because all connected nodes effectively form a single security domain.

a) What this means is that you need to add the transport CA from the local cluster to the remote cluster and vice versa, i.e. both ways. How much of a cert expert are you, or do you have a cert expert around? This stuff is harder with .p12s than .pems.

b) OR (and this will only work for one pair) take the CA you used to generate the transport .p12s for the remote cluster and use IT to generate the transport .p12 certificates on the local (central) cluster. Then it should work because the two clusters will have a shared CA.

I would try B) first and see if you can get it to work... a rough sketch is below.
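A rough sketch of B) with elasticsearch-certutil on the central node, assuming you copy the remote's elastic-stack-ca.p12 over first (file names and output path here are just the defaults, adjust to yours):

bin\elasticsearch-certutil cert --ca elastic-stack-ca.p12 --out config\certs\elastic-certificates.p12

Then point xpack.security.transport.ssl.keystore.path / truststore.path at that new .p12 on the central node and restart it.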

Then we can talk about A... for which you will either need to change to .pem (because it is easy to add CAs) or learn some low-level details of .p12s.

This is self-signed cert stuff... always painful till you learn it.

In my humble opinion, this is all easier with .pem because the CAs are just added as a list.
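For example, with .pem the transport section on each side could look something like this (file names are placeholders; each cluster lists its own CA plus the other side's CA):

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs\transport.key
xpack.security.transport.ssl.certificate: certs\transport.crt
xpack.security.transport.ssl.certificate_authorities:
  - certs\local-ca.pem
  - certs\remote-ca.pem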

Yes, it is a self-signed cert.

I think option B is what I was following. What I have done so far is copy the CA (elastic-stack-ca.p12) from the remote cluster to the local (central) server, and then generate the transport certs used in elasticsearch.yml and the "elasticsearch-ca.pem" for kibana.yml.
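For reference, this is roughly how I am referencing it in kibana.yml (the path is just where I placed the file):

elasticsearch.hosts: ["https://50.22.343.199:9200"]
elasticsearch.ssl.certificateAuthorities: ["C:\\elastic\\kibana-8.6.2\\config\\certs\\elasticsearch-ca.pem"]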

Is that correct?

Ok that is good

That does not really matter...

What OS and what is the latest error?

How did you configure the remote cluster, via the UI or .yml?

Windows Server 2019

I used the UI to create the remote cluster. Below is the request.

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "EU-remote-cluster": {
          "skip_unavailable": true,
          "mode": "sniff",
          "proxy_address": null,
          "proxy_socket_connections": null,
          "server_name": null,
          "seeds": [
            "50.22.343.200:9300",
            "50.22.343.201:9300"
          ],
          "node_connections": 3
        }
      }
    }
  }
}

Below is the latest error :

[["50.22.343.200:9300","50.22.343.201:9300"]]
[2023-09-26T16:26:52,577][WARN ][o.e.t.SniffConnectionStrategy] [ES-centrKibana-node-1] fetching nodes from external cluster [EU-remote-cluster] failed
org.elasticsearch.transport.ConnectTransportException: [][50.22.343.201:9300] connect_exception
	at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1119) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:115) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.BaseFuture.setException(BaseFuture.java:149) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.onFailure(ListenableFuture.java:147) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addListener$0(Netty4TcpChannel.java:62) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
	at java.lang.Thread.run(Thread.java:1589) ~[?:?]
Caused by: org.elasticsearch.common.util.concurrent.UncategorizedExecutionException: Failed execution
	at org.elasticsearch.common.util.concurrent.FutureUtils.rethrowExecutionException(FutureUtils.java:80) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:72) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:112) ~[elasticsearch-8.6.2.jar:?]
	... 20 more
Caused by: java.util.concurrent.ExecutionException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: /50.22.343.201:9300
	at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.getValue(BaseFuture.java:257) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:231) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:53) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:65) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:112) ~[elasticsearch-8.6.2.jar:?]
	... 20 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: /50.22.343.201:9300
Caused by: java.net.ConnectException: Connection refused: no further information
	at sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]
	at sun.nio.ch.Net.pollConnectNow(Net.java:672) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:973) ~[?:?]
	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[?:?]
	... 7 more
[2023-09-26T16:26:52,593][INFO ][o.e.c.s.ClusterSettings  ] [ES-centrKibana-node-1] updating [cluster.remote.EU-remote-cluster.skip_unavailable] from [false] to [true]
[2023-09-26T16:28:22,666][INFO ][o.e.c.m.MetadataMappingService] [ES-centrKibana-node-1] [.kibana_8.6.2_001/2GSMuIhSSr2GeLAB5Vt6gw] update_mapping [_doc]
[2023-09-26T16:28:25,207][WARN ][o.e.t.SniffConnectionStrategy] [ES-centrKibana-node-1] fetching nodes from external cluster [EU-remote-cluster] failed
org.elasticsearch.transport.ConnectTransportException: [][50.22.343.201:9300] connect_exception
	at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1119) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:115) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.BaseFuture.setException(BaseFuture.java:149) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.onFailure(ListenableFuture.java:147) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addListener$0(Netty4TcpChannel.java:62) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609) ~[?:?]
	at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
	at java.lang.Thread.run(Thread.java:1589) ~[?:?]
Caused by: org.elasticsearch.common.util.concurrent.UncategorizedExecutionException: Failed execution
	at org.elasticsearch.common.util.concurrent.FutureUtils.rethrowExecutionException(FutureUtils.java:80) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:72) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:112) ~[elasticsearch-8.6.2.jar:?]
	... 20 more
Caused by: java.util.concurrent.ExecutionException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: /50.22.343.201:9300
	at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.getValue(BaseFuture.java:257) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:231) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:53) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:65) ~[elasticsearch-8.6.2.jar:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:112) ~[elasticsearch-8.6.2.jar:?]
	... 20 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: /50.22.343.201:9300
Caused by: java.net.ConnectException: Connection refused: no further information
	at sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]
	at sun.nio.ch.Net.pollConnectNow(Net.java:672) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:973) ~[?:?]
	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[?:?]
	... 7 more

That does not look like an SSL connection error now... it looks like plain old connectivity. That is not to say that you do not have SSL, but that looks like the connection being blocked / refused.

Connection refused: no further information: /50.22.343.201:9300

From the local, try:

$ telnet 50.22.343.201 9300

As I mentioned earlier, telnet to the remote IP works only if I use the "PortListener" utility on the remote server to listen on port 9300. Otherwise I get "Could not open connection to the host, on port 9300: Connect failed".

I used that utility to make sure it is not a networking issue.

Hi @Kvoyce2023

Not sure what to tell you...

I don't know what utility you're referring to, but if you cannot telnet directly, and trying to telnet directly gives you basically the same connection error that's in the Elasticsearch log, then it seems that you still have the connectivity issue.

That's what the error log is saying: the local cannot connect at all to that remote node IP:PORT.

It's not saying it's connected and can't do SSL.

It's not saying it's connected and can't authenticate.

It's saying it flat-out can't connect, specifically to 50.22.343.201:9300. Perhaps take that out and only leave "50.22.343.200:9300", set node connections to 1, and turn off sniff mode, or perhaps the connectivity works for one node and not the other.
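If you want to literally turn off sniff mode, proxy mode would look something like this (using the address you already have; you may need to null out the seeds in the same request since those belong to sniff mode):

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "EU-remote-cluster": {
          "mode": "proxy",
          "proxy_address": "50.22.343.200:9300",
          "seeds": null
        }
      }
    }
  }
}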

It certainly can be something else, but I cannot tell you how many times I've been assured there's no network connectivity issue and there still is.

If you cannot telnet to the remote IP and port from the local host without any extra "tools", I would not expect it to work.

When I telnet I see this; this is to an SSL-enabled transport node:

$ telnet 192.168.86.90 9300
Trying 192.168.86.90...
Connected to 192.168.86.90.
Escape character is '^]'.

That is the transport IP and port, no extra tools needed.

You could try
network.host: 0.0.0.0

Thanks @stephenb.

So, as you know, telnet just establishes a connection from one server to another over the specified port. If nothing on the other side is listening, e.g. an application or service, then telnet will fail even if the port is open, right? So I used the "PortListener" utility to confirm the port is open. So even when I am able to telnet to the remote IP on 9300, I am still getting this connection exception for the remote cluster. Not sure if any other configuration or service needs to be running or listening on port 9300? My centralized server has only Kibana and Elasticsearch running.

Anyway, as you suggested, I tried the below with one IP, without specifying sniff mode, and set to 1 node connection.

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster-remote-1": {
          "seeds": [
            "50.22.343.200:9300"
          ],
          "node_connections": 1
        }
      }
    }
  }
}

Below is what I got for "GET /_remote/info"

{
  "cluster-remote-1": {
    "connected": false,
    "mode": "sniff",
    "seeds": [
     "50.22.343.200:9300"
    ],
    "num_nodes_connected": 0,
    "max_connections_per_cluster": 1,
    "initial_connect_timeout": "30s",
    "skip_unavailable": false
  }
}

"sniff" acts as default. But still not connected.

Yup, sorry, you are correct on the sniff mode, apologies.

I do not have an answer for you at this time... this message is pretty clear...

This is actively refused, not even a timeout... again, I could be wrong.

Connection refused: no further information: /50.22.343.201:9300

A couple of things you can try...

Set the setting I showed above on both sides:
network.host: 0.0.0.0

Set the transport logging to trace on both sides; you can do that via the API. The cluster does not need to be restarted, it takes effect immediately.

You should also look in the logs on the remote side and see if there is an error... if the local actually connects to the remote and then fails due to roles or something, you should see something in the remote Elasticsearch logs.

PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.transport": "trace"
  }
}

# To clear 
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.transport": null
  }
}

Set up a single-node Elasticsearch on the same subnet as the remote and see if you can connect.
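A minimal single-node elasticsearch.yml for that test could be something like this (the IP and names are placeholders; you would still need transport TLS set up the same way if you want to test the remote-cluster connection itself):

cluster.name: connectivity-test
node.name: test-node-1
discovery.type: single-node
network.host: <an IP on the remote subnet>
http.port: 9200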

Good luck, something basic is going on here... when you get past this, let us know.

Hi @stephenb :
I am able to get the local cluster to connect to the remote cluster. Also, I am able to connect to a single ES node from the remote cluster; both are on the same subnet.

The only pending item is that I am still not able to connect from the remote cluster to the local.

I have added the "trace" setting as you mentioned above, but the ES log files on the remote and local are not showing any messages for the remote cluster I created. Is there any specific file or variable I need to update? So far I have updated the log4j2.properties "rootLogger.level" and "logger.xpack_security_audit_logfile.level" to "debug".

So you got a remote and local to connect on a local subnet, is that correct? Could you do a cross-cluster search, etc.?

Not as far as I know... I just set that... you should see it. You can always just run Elasticsearch in the foreground...
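If you want to confirm the transient logger setting actually took, you can quickly check the cluster settings, something like:

GET _cluster/settings?flat_settings=true

# should include
# "transient": { "logger.org.elasticsearch.transport": "trace" }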

When I connect to a remote server I see messages like

This is transport debug... actually I would set it to debug, not trace... trace puts out too much.

[2023-10-05T09:34:30,087][DEBUG][o.e.t.TcpTransport       ] [node-1] opened transport connection [30] to [{stephenb-k8s#XX.XX.XXX.XXX:9400}{589uBnNySzKRQnJdRe4xEA}{sdfgsdfg33e8c82172f79c.us-west1.gcp.cloud.es.io}{XX.XX.XXX.XXX::9400}{IScdfhilmrstvw}{7.17.0}{server_name=c957ac2fbd1f4a00a233e8c82172f79c.us-west1.gcp.cloud.es.io}] using channels [[Netty4TcpChannel{localAddress=/192.168.2.107:60996, remoteAddress=asfdasdfsadfc82172f79c.us-west1.gcp.cloud.es.io/XX.XX.XXX.XXX::9400, profile=default}]]