Use Azure File Share as data storage for multiple ES clusters running on K8s

Is it possible to use an Azure File Share (similar to NFS) as the data storage for multiple ES clusters running on K8s, so that both clusters can read from and write to a common location? You can think of it as a cross-geo HA solution.

Welcome!

It's absolutely not recommended to store data on network drives or to split a cluster across multiple regions.

Instead, use the cross-cluster replication (CCR) feature (which I think is a commercial feature, though).
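To give a rough idea of what the follower side looks like, here is a minimal sketch using the CCR follow API, assuming cluster B already has cluster A registered as a remote cluster under the alias cluster_a; the endpoint, credentials, and index names below are placeholders:

```python
import requests

# Placeholder endpoint and credentials for cluster B (the follower).
FOLLOWER_URL = "https://cluster-b-es-http:9200"
AUTH = ("elastic", "changeme")

# Create a follower index on cluster B that replicates "my-index" from cluster A,
# which is assumed to be registered as the remote cluster "cluster_a".
resp = requests.put(
    f"{FOLLOWER_URL}/my-index-copy/_ccr/follow",
    params={"wait_for_active_shards": "1"},
    json={
        "remote_cluster": "cluster_a",
        "leader_index": "my-index",
    },
    auth=AUTH,
    verify=False,  # in practice, verify against the ECK-generated CA certificate
)
resp.raise_for_status()
print(resp.json())
```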

Thanks @dadoonet.

Point noted.

The goal is to achieve a live migration with zero downtime. Does the CCR feature guarantee that? Like a seamless migration.

Live migration of what?
Why across multiple regions?

(Long post alert!)
If not across multiple regions, is the requirement below achievable via the CCR feature within a single region?

Requirement:
A K8s cluster (ECK_CLUSTER_A) is running ES, with data pods spread across 3 different zones in a specific region (let's say West Europe). Let's say this cluster has some data stored.
Spin up a new cluster (ECK_CLUSTER_B) in the same region (West Europe) and deploy ES with the same configuration as above.
Scenario 1: How do I store the data so that it's accessible from both clusters?
Scenario 2: In case ECK_CLUSTER_A gets destroyed, how can I access the same data from the newly spun-up cluster ECK_CLUSTER_B?

Correct me if I am wrong, but I guess scenario 2 portrays something like a live migration of the ES service with no downtime.

Every Elasticsearch node needs a dedicated area to store data, which cannot be shared with other nodes inside or outside the cluster. This area usually also stores cluster state, so it cannot be brought up on a node within a different cluster. Both scenarios 1 and 2 are therefore unlikely to work.

CCR is probably the easiest way to set this up, but before CCR was available a common approach was to run two separate clusters and write to them in parallel so they both receive the same data and stay in sync, often using a message queue to allow for buffering in case of failures.
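As an illustration of that dual-write pattern, here is a minimal sketch of a consumer that drains a queue and indexes each document into both clusters independently; the endpoints, credentials, index name, and the in-process queue (standing in for something like Kafka or RabbitMQ) are all placeholders:

```python
import queue
import requests

# Placeholder endpoints for the two independent clusters; in a real setup these
# would be the ECK-generated HTTP services of ECK_CLUSTER_A and ECK_CLUSTER_B.
CLUSTERS = [
    "https://eck-cluster-a-es-http:9200",
    "https://eck-cluster-b-es-http:9200",
]
AUTH = ("elastic", "changeme")  # placeholder credentials
INDEX = "my-index"              # placeholder index name

# Stand-in for a durable message queue that all producers write to.
doc_queue: "queue.Queue[dict]" = queue.Ueue() if False else queue.Queue()


def consume_and_dual_write() -> None:
    """Drain the queue, indexing every document into both clusters independently.

    A real consumer would acknowledge the message only after both writes succeed
    and rely on the queue's redelivery to buffer documents while a cluster is down.
    """
    while not doc_queue.empty():
        doc = doc_queue.get()
        for base_url in CLUSTERS:
            resp = requests.put(
                f"{base_url}/{INDEX}/_doc/{doc['id']}",
                json=doc,
                auth=AUTH,
                verify=False,  # in practice, verify against each cluster's CA cert
            )
            resp.raise_for_status()
        doc_queue.task_done()


# Example: the producer enqueues a document once; the consumer writes it to both clusters.
doc_queue.put({"id": "1", "message": "hello from the dual-write pipeline"})
consume_and_dual_write()
```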
