Remote node cluster join


I have a 3-node cluster on my local network, and it works like a charm. I just want to add a 4th node, but on another network.

But I'm not able to complete the discovery: my local master is correctly discovered by the remote node.
The handshake is OK, but afterwards my local node uses its local IP...

[WARN ][o.e.d.HandshakingTransportAddressConnector] [Remote-NODE] [connectToRemoteMasterNode[{PUBLIC_IP:PORT}]] completed handshake with [{Local-NODE}{MyJxF48kRp-TgZml8tTXXw}{e8qZKSsYRBi_DonDE_t0DA}{LOCAL_IP}{LOCAL_IP:PORT}{cdfhilmrstw}{ml.machine_memory=8319733760, ml.max_open_jobs=20, xpack.installed=true, ml.max_jvm_size=4294967296, transform.node=true}] but followup connection failed.
org.elasticsearch.transport.ConnectTransportException: [Local-NODE][LOCAL_IP:PORT] connect_timeout[30s]

Here is my discovery config on the remote node:

discovery.seed_hosts: ["", "PUBLIC_IP:PORT"]

I tried to play with transport.bind_host and transport.publish_host on the local node, but it crashed my whole local stack...

Any ideas?

Thanks in advance,


It'd help if you could post the logs and your config :slight_smile:

All nodes must be accessible to all other nodes at their publish address, which is written as {LOCAL_IP} in the log message you shared. It's not possible to have nodes which appear to have different addresses depending on whence you're talking to them: all the nodes need to be on the same network.
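For context, the setting involved is the node's publish address. A minimal sketch of the relevant elasticsearch.yml lines (ROUTABLE_IP is a placeholder, not from the thread):

```yaml
# elasticsearch.yml (illustrative sketch, not a tested config)
network.host: 0.0.0.0              # address(es) to bind and listen on
network.publish_host: ROUTABLE_IP  # the one address every other node must be able to reach
```

The publish address is what a node advertises to its peers, so it only works if that single address is routable from every other node in the cluster.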


Hi Warkolm, David,

Thanks for your answers.
So it's not possible to make nodes talk to each other if they are not on the same subnet?
Is the only alternative Cross Cluster Replication?

The goal is to have decentralized nodes for HA...

A decentralised node would not give you HA. A majority of master-eligible nodes always needs to be available to achieve HA, which means you would need to distribute your cluster across 3 zones/data centres.
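To put numbers on that majority requirement, here is a quick illustrative sketch (plain arithmetic, not an Elasticsearch API):

```python
def majority(n_masters: int) -> int:
    """Minimum number of master-eligible nodes that must stay reachable."""
    return n_masters // 2 + 1

# Three master-eligible nodes split 2 onsite + 1 offsite:
print(majority(3))             # 2

# Losing the onsite datacentre (2 nodes) leaves only 1 node:
print(1 >= majority(3))        # False -> no quorum, cluster unavailable

# Losing only the offsite node leaves 2 nodes:
print(2 >= majority(3))        # True -> cluster stays available
```

This is why a 2+1 split across two sites does not survive the loss of the larger site, while 1+1+1 across three sites survives the loss of any one.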


The idea is to have 2 nodes onsite on 2 different physical servers and 1 offsite.
I already have 3 nodes onsite, and it works as expected.

That's not quite true, at least, it depends what you mean by "subnet". You need to configure your network so that it looks the same from every node's point of view.

I mean 2 different datacenters, for example...

OK, so it means that if I want to achieve that, I need to configure all my nodes with a public IP/port as transport.publish_host.
In the end, my local nodes will have to hairpin back through the public address to talk to each other?
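Concretely, a per-node configuration along those lines might look like this (PUBLIC_IP and the port are placeholders; this assumes the firewall/NAT forwards the transport port to each node):

```yaml
# elasticsearch.yml on each node (illustrative, untested)
transport.bind_host: 0.0.0.0        # listen on all local interfaces
transport.publish_host: PUBLIC_IP   # address advertised to the other nodes
transport.publish_port: 9300        # externally reachable transport port
```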

This is not a recommended approach, because latency between datacentres can cause unexpected issues with indexing and queries and general cluster activities.

OK, so what is the right one? :slight_smile:

That's the simplest option, but you can also do fancier things with a VPN or NAT too. That level of network engineering isn't really in scope for this forum, I expect you would benefit from finding some networking experts to ask instead.

Also as Mark observes you are putting your cluster stability in the hands of the cross-data-centre link. If it is unreliable or slow then your cluster will be unreliable or slow too.

I'm familiar with the networking stuff, that's my job :wink:
But the main idea was not to create a monster, if Elasticsearch could do it without such a thing!

And if performance can get worse, it makes no sense...
Cross Cluster Replication seems to be the right way... but it's not possible with a Basic licence :frowning:

:slight_smile: ok you probably know a good deal more than I do about this then.

CCR is another option indeed, depending on what exactly you're trying to achieve. There's a good summary of what Elasticsearch needs for resilience in these docs.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.