How to set up an external remote cluster?

Hello here,

ECK added support for local remote clusters, but how do we manually add a remote cluster? (Both clusters are running through ECK.) How should we retrieve the certificates and inject them into the other cluster?

Thanks if someone can help :slight_smile:


We have some preliminary documentation here that should get you started.

I say preliminary because this is for the upcoming 1.1 release of ECK; it is still a work in progress and might change.

Thanks @pebrc, I will try this and report back here if I succeed :slight_smile:

I followed the guide:

It works correctly when the Elasticsearch cluster is inside the same Kubernetes cluster.

But I am having issues connecting to an external cluster.

  • Can we expose the transport service (port 9300) behind an HTTP proxy (the nginx-ingress controller in this case)? (I tried without success.)

  • I saw that this transport service seems to be used only for discovery; afterwards the "master" Elasticsearch seems to establish a direct connection to each node of the cluster (on port 9200, I suppose?). But if the cluster is external, we cannot expose the IP of each node directly; we need to expose them behind a service/load balancer. Is that correct, and what do you suggest?

Thanks again

I haven't tested this setup at all, so it might not be useful information, but if you do need to set up a Service selecting a single Pod, the StatefulSet controller creates a label you can use. For example:
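A minimal sketch of such a per-Pod Service, using the `statefulset.kubernetes.io/pod-name` label the StatefulSet controller adds to every Pod. The Pod name `quickstart-es-default-0` is a placeholder; substitute your own:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: quickstart-es-default-0-transport
spec:
  selector:
    # label set automatically by the StatefulSet controller, unique per Pod
    statefulset.kubernetes.io/pod-name: quickstart-es-default-0
  ports:
  - name: transport
    port: 9300
    targetPort: 9300
```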

Thanks for this answer, but my issue is not really about selecting a single pod. It is about exposing the service behind an HTTP proxy, and about how masters connect to the nodes of an external remote cluster.

Ingress is by definition HTTP only, but we need a TCP "ingress", so to speak, for Elasticsearch's transport layer. There are a couple of workarounds, though:

  • if you are running on one of the hosted Kubernetes offerings, e.g. Google's GKE, then by far the easiest option is to set the type of the transport service to LoadBalancer; you can expose the TCP service that way
  • if you have to use ingress-nginx, then you have to use the proprietary Nginx feature that allows exposing TCP services. See
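For illustration, rough sketches of both options. The cluster name `quickstart` and the namespaces are assumptions; check the labels against your own manifests:

```yaml
# Option 1: expose the transport port through a LoadBalancer Service,
# selecting the Elasticsearch Pods via the label ECK puts on them
apiVersion: v1
kind: Service
metadata:
  name: quickstart-es-transport-public
spec:
  type: LoadBalancer
  selector:
    elasticsearch.k8s.elastic.co/cluster-name: quickstart
  ports:
  - name: transport
    port: 9300
    targetPort: 9300
---
# Option 2: ingress-nginx TCP services ConfigMap
# (the controller must be started with --tcp-services-configmap
#  pointing at this ConfigMap)
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format is "<external port>": "<namespace>/<service>:<port>"
  "9300": "default/quickstart-es-transport:9300"
```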

If you look at the link to the documentation draft I shared earlier, you will see that we set up the remote cluster connection with mode: "proxy". This is a new feature in 7.6 that was built exactly for scenarios like this one, where we cannot route to every node in the target cluster directly, but only indirectly through a Kubernetes Service/LoadBalancer.
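For reference, a remote cluster in proxy mode is configured via the cluster settings API and might look roughly like this; the alias `cluster-two` and the proxy address are placeholders for your own values:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster-two": {
          "mode": "proxy",
          "proxy_address": "quickstart-es-transport.example.com:9300"
        }
      }
    }
  }
}
```

In proxy mode the local cluster opens all connections to the single `proxy_address` instead of discovering and dialing each remote node individually, which is what makes it work behind a load balancer.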

I was able to successfully set up the external cluster, thanks a lot @pebrc

I am sometimes getting errors like "[indices:data/read/search] disconnected", although the TCP proxy load balancer on the remote cluster still seems to be up.

  • In the remote cluster, should the Kubernetes transport service select only certain kinds of nodes (the data nodes?)
  • Are there any timeout options I should increase?

I set up transport.ping_schedule and it seems better now; I will keep monitoring.
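In case it helps anyone else, the setting can go in the node configuration of the ECK manifest. The `5s` interval here is just the value I tried, not a recommendation:

```yaml
spec:
  nodeSets:
  - name: default
    config:
      # send periodic pings to keep idle transport connections alive
      transport.ping_schedule: "5s"
```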