Can ECK be configured to allow all nodes to join an existing cluster?

Our k8s network is connected to other physical and virtual machine networks through the BGP protocol, so they can communicate with each other directly. Can ECK be configured so that its nodes join another, existing cluster? That way the cluster running on physical machines could be temporarily expanded.

I would also like to smoothly migrate the physical cluster to ECK in a similar way: create nodes in ECK with the same count and roles, add them to the existing physical cluster, and once the data has been migrated to the ECK nodes, take the nodes in the physical cluster offline. Is this possible today?

I have tried configuring only data nodes, but it throws an error saying "Required value: Elasticsearch needs to have at least one master node."
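For reference, a data-only spec along the following lines is what triggers that error; ECK validates that at least one nodeSet is master-eligible (name and version below are just placeholders):

    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: data-only            # placeholder name
    spec:
      version: 8.12.0            # placeholder version
      nodeSets:
      - name: data
        count: 3
        config:
          node.roles: ["data"]   # no master-eligible nodeSet, so validation rejects the spec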

Hello,

ECK does not natively support this scenario today.

Theoretically you could achieve this by pausing Elasticsearch nodes (see Troubleshooting methods | Elastic Cloud on Kubernetes [2.11] | Elastic) and running the elasticsearch-node tool, but going that route seems dangerous to me.
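For reference, pausing (suspending) pods is done by annotating the Elasticsearch resource with the names of the pods to suspend, along these lines (resource and pod names are only examples):

    # Suspend two pods of a cluster named "quickstart" (names are examples)
    kubectl annotate es quickstart eck.k8s.elastic.co/suspend="quickstart-es-default-0,quickstart-es-default-1"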

Since snapshots are taken incrementally, you can also perform a sort of live migration by following this process (a rough sketch of the API calls follows the list):

  • t0 create cluster B
  • t1 snapshot cluster A's data
  • t2 restore cluster A's data to cluster B
  • t3 switch your client traffic to cluster B
  • t4 snapshot cluster A again to capture the docs indexed between t1 and t3
  • t5 restore the latest snapshot to cluster B
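To make that concrete, here is a rough sketch of the API calls involved, assuming an S3 repository registered on both clusters (repository, bucket, snapshot, and index names are placeholders):

    # On cluster A and cluster B: register the same snapshot repository
    PUT _snapshot/migration-repo
    {
      "type": "s3",
      "settings": { "bucket": "my-migration-bucket" }
    }

    # t1 / t4 on cluster A: take a snapshot (incremental after the first one)
    PUT _snapshot/migration-repo/migration-snap-1?wait_for_completion=true

    # t2 / t5 on cluster B: restore that snapshot
    POST _snapshot/migration-repo/migration-snap-1/_restore
    {
      "indices": "*",
      "include_global_state": false
    }

Note that for the second restore (t5), the indices restored at t2 must first be closed or deleted on cluster B, since Elasticsearch will not restore into open indices.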

Another alternative is to reindex from a remote cluster (see Migrate from a self-managed cluster with a self-signed certificate using remote reindex | Elasticsearch Service Documentation | Elastic).
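A minimal reindex-from-remote sketch (host, credentials, and index names are placeholders; the destination cluster also has to allow the remote host via the reindex.remote.whitelist setting, which on ECK would go into the nodeSet config):

    POST _reindex
    {
      "source": {
        "remote": {
          "host": "https://physical-cluster.example.com:9200",
          "username": "elastic",
          "password": "<password>"
        },
        "index": "my-index"
      },
      "dest": {
        "index": "my-index"
      }
    }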

Hope this helps.

Thank you very much. I tried the following method to achieve cross-cluster scaling and migration, although it is not a secure practice:

  1. Generate a shared certificate and create a dedicated secret containing tls.crt, tls.key, and ca.crt. All Elasticsearch instances in Kubernetes use this certificate.
  2. Create an ES instance named test-1. Configure "cluster.name: test" in "spec.nodeSets[].config", mount the certificates from the secret, and point the TLS settings in the configuration at these certificate files (see the sketch after this list).
  3. Find the master pod IPs in test-1, create an ES instance named test-2, and configure discovery.seed_hosts: [ "list of master pod IPs in test-1" ] in its config.
  4. Call the "_cluster/settings" API and set "persistent.cluster.routing.allocation.exclude._name" to "test-1-es-data-0,test-1-es-data-1" (see the settings request sketch after this list).
  5. Run "kubectl annotate es test-1 eck.k8s.elastic.co/suspend=<all test-1 pod names, comma-separated>" to suspend all test-1 pods.
  6. Remove the discovery.seed_hosts setting from test-2.yaml and run kubectl apply -f test-2.yaml.
  7. Delete test-1.
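To make steps 1–3 more concrete, here is a rough, untested sketch of the secret and of the test-2 manifest as I understand them (secret name, version, mount path, IPs, and counts are placeholders; overriding the transport TLS settings like this bypasses ECK's own certificate management, which is part of why it is not a secure practice):

    # Step 1: one secret holding the shared certificate files (run from the directory containing them)
    kubectl create secret generic shared-transport-certs \
      --from-file=tls.crt --from-file=tls.key --from-file=ca.crt

    # Steps 2-3: test-2 joins test-1 by reusing the same cluster.name, the same
    # certificates, and the test-1 master pod IPs as seed hosts
    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: test-2
    spec:
      version: 8.12.0                                        # placeholder version
      nodeSets:
      - name: default
        count: 3
        config:
          cluster.name: test
          discovery.seed_hosts: ["10.0.0.11", "10.0.0.12"]   # test-1 master pod IPs (placeholders)
          xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/shared-certs/tls.key
          xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/shared-certs/tls.crt
          xpack.security.transport.ssl.certificate_authorities: ["/usr/share/elasticsearch/config/shared-certs/ca.crt"]
        podTemplate:
          spec:
            containers:
            - name: elasticsearch
              volumeMounts:
              - name: shared-certs
                mountPath: /usr/share/elasticsearch/config/shared-certs
                readOnly: true
            volumes:
            - name: shared-certs
              secret:
                secretName: shared-transport-certs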
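And a sketch of the step-4 request that drains the old test-1 data nodes before they are suspended (node names match the ones in the list above):

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.exclude._name": "test-1-es-data-0,test-1-es-data-1"
      }
    }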
