I have a 3-node Elasticsearch cluster in production. Out of the 3, 2 nodes are in the primary site and 1 node is in the DR site.
We are already using Commvault as our file-system backup solution, but the following statement in the Elasticsearch documentation put us on the back foot:
It is not possible to back up an Elasticsearch cluster simply by taking a copy of the data directories of all of its nodes. Elasticsearch may be making changes to the contents of its data directories while it is running, and this means that copying its data directories cannot be expected to capture a consistent picture of their contents. Attempts to restore a cluster from such a backup may fail, reporting corruption and/or missing files, or may appear to have succeeded having silently lost some of its data. The only reliable way to back up a cluster is by using the snapshot and restore functionality.
Assuming we choose the snapshot API with a shared file system repository to take the snapshot, it requires the same shared filesystem to be mounted at the same location on all master and data nodes.
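For context, this is how we understand the setup would look, as a sketch only — the mount point `/mnt/es_backups` and the repository name `my_fs_backup` are hypothetical placeholders, and the mount would have to exist on every master and data node first:

```shell
# elasticsearch.yml on EVERY master and data node (hypothetical mount point):
#   path.repo: ["/mnt/es_backups"]

# Then register the shared file system repository once via the REST API
# (repository name is a placeholder; the location must be under path.repo):
curl -X PUT "localhost:9200/_snapshot/my_fs_backup" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/es_backups"}}'
```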
Since our Elasticsearch nodes are distributed across 2 datacenters, how can we achieve this? Can we provide an FTP path in the path.repo setting and expect it to work, or is there any other way to achieve this?
Any help would be appreciated.