Elasticsearch cluster setup using public IPs on AWS and Google Cloud

I have one instance on AWS and one instance on Google Cloud, and I am trying to form a two-node cluster between them. I am using the following configuration on the two nodes:
Node 1 (AWS):
cluster.name: abc
node.name: node1
network.host: 0.0.0.0 (unable to bind public ip)
discovery.zen.ping.unicast.hosts: ["node1 public ip", "node2 public ip"]
discovery.zen.ping.multicast.enabled: false

Node 2 (Google Cloud):
cluster.name: abc
node.name: node2
network.host: 0.0.0.0 (unable to bind public ip)
discovery.zen.ping.unicast.hosts: ["node1 public ip", "node2 public ip"]
discovery.zen.ping.multicast.enabled: false

But they are not detecting each other. Please help me.

First, I would definitely not do this. Having a cluster in 2 different data centers is a bad idea IMO.
Not sure why you want to do such a thing.

That being said, you need to make sure that port 9300 is open to the public (or at least to the other node's public IP).
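For what it's worth, on both AWS and Google Cloud the public IP is NAT'ed and never actually bound to the instance's network interface, which is why `network.host: <public ip>` fails. A sketch for ES 2.x (the IPs below are placeholders for your two public IPs): bind to all local interfaces, but advertise the public IP to other nodes with `network.publish_host`:

```yaml
# elasticsearch.yml on node1 (AWS); node2 on Google Cloud mirrors this
# with its own public IP as publish_host
cluster.name: abc
node.name: node1
network.bind_host: 0.0.0.0           # listen on all local interfaces
network.publish_host: 203.0.113.10   # placeholder: node1's public IP
discovery.zen.ping.unicast.hosts: ["203.0.113.10", "198.51.100.20"]
discovery.zen.ping.multicast.enabled: false
```

With `publish_host` set, the node tells its peers to reach it on the public address instead of the private one it is actually bound to.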

Also, you did not say which version you are using.


Thanks David for your reply. Actually, we are moving off AWS, and I want to replicate the data to a Google Cloud instance, so I am trying to add the Google instance as a node to my one-node cluster on AWS. I am using version 2.3.4 and I opened port 9300. It still doesn't work.

Use snapshot and restore.

That's definitely a better way to move your data from one cluster to another.

So back up to S3 with the cloud-aws plugin. Install the same plugin on your Google Cloud instance and restore from there.
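A sketch of that flow with the ES 2.x REST API. The bucket name, region, and repository name are placeholders, and this assumes the instance has S3 credentials configured; the requests obviously need a running cluster on localhost:9200:

```shell
# On the AWS node: install the plugin (ES 2.x syntax), then restart the node
bin/plugin install cloud-aws

# Register an S3 repository (placeholder bucket and region)
curl -XPUT 'localhost:9200/_snapshot/my_s3_repo' -d '{
  "type": "s3",
  "settings": { "bucket": "my-es-backup-bucket", "region": "us-east-1" }
}'

# Take a snapshot and wait for it to finish
curl -XPUT 'localhost:9200/_snapshot/my_s3_repo/snapshot_1?wait_for_completion=true'

# On the Google Cloud node: install the same plugin, register the same
# repository pointing at the same bucket, then restore
curl -XPOST 'localhost:9200/_snapshot/my_s3_repo/snapshot_1/_restore'
```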

Better than opening ports IMO.

Won't I lose some data? I mean, first I will take a snapshot of the cluster, but meanwhile some documents might get indexed. Do I have to take another snapshot for that data? I am confused, please help me.

Ideally you should stop indexing into the AWS cluster first.

That said, you can always take another snapshot after the first one and restore again on the GCloud cluster. Only new segments will be copied over.
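Since snapshots are incremental, a second pass only copies segments created since the first snapshot. A sketch, assuming a registered S3 repository named `my_s3_repo` and an index named `old` (both placeholders); note that in 2.x you must close an index on the target cluster before restoring over it:

```shell
# On AWS: second, incremental snapshot of the old index
curl -XPUT 'localhost:9200/_snapshot/my_s3_repo/snapshot_2?wait_for_completion=true' -d '{
  "indices": "old"
}'

# On Google Cloud: close the previously restored index, then restore again
# (restore reopens the index when it finishes)
curl -XPOST 'localhost:9200/old/_close'
curl -XPOST 'localhost:9200/_snapshot/my_s3_repo/snapshot_2/_restore' -d '{
  "indices": "old"
}'
```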

A solution could be:

  • snapshot the old index from the AWS cluster to S3
  • restore the old index on the Google cluster from S3
  • send your write traffic to the Google cluster instead of AWS, into an index named new
  • search across both indices, new and old (note that you can use an alias)
  • snapshot the old index again from the AWS cluster to S3
  • restore the old index again on the Google cluster from S3

Then you will have a new cluster on Google Cloud mixing old and new data.
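The alias mentioned in the steps could be set up on the Google cluster like this (index names `old` and `new` as in the steps; the alias name `docs` is a placeholder):

```shell
# Point one alias at both indices so searches cover old and new data
curl -XPOST 'localhost:9200/_aliases' -d '{
  "actions": [
    { "add": { "index": "old", "alias": "docs" } },
    { "add": { "index": "new", "alias": "docs" } }
  ]
}'

# Search both indices through the alias
curl 'localhost:9200/docs/_search?q=*'
```

Once the final snapshot/restore pass is done, the same API can swap the alias to point only at the merged data without touching any clients.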

My 2 cents. I hope this helps.


Thank you very much David