Multiple Data Centers Replication

Hi,

We figured out that Elasticsearch doesn’t provide a built-in solution for replication across multiple data centers.
I wonder if someone has found a good alternative for this scenario.

Your help would be much appreciated:)
Nitzan.

Yeah, that isn't built in.

I've seen it done by queueing the required changes twice in the application
talking to ES, once for each cluster, in some message or job queueing
system. Then each DC replays its queue into the Elasticsearch cluster in
that DC (rough sketch below).

This kept the two clusters totally independent of each other.
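If it helps, here's roughly what the replay side can look like. This is only a sketch, not a drop-in implementation: it assumes the elasticsearch-py client (7.x-style API) and a hypothetical `get_next_change()` function that pops the next change off whatever queueing system you use (RabbitMQ, Kafka, etc.). One consumer like this would run in each DC against its local cluster.

```python
# Rough sketch of a per-DC queue consumer.
# Assumes elasticsearch-py (7.x-style API) and a hypothetical get_next_change()
# that returns dicts like {"index": "...", "id": "...", "doc": {...}} or None.
from elasticsearch import Elasticsearch

local_es = Elasticsearch(["http://localhost:9200"])  # the cluster in this DC

def replay_queue(get_next_change):
    """Drain this DC's queue and apply each change to the local cluster."""
    while True:
        change = get_next_change()
        if change is None:
            break  # queue is empty for now; come back later
        local_es.index(
            index=change["index"],
            id=change["id"],
            body=change["doc"],  # 8.x clients use document= instead of body=
        )
```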

Thanks @nik9000!
We actually had in mind a component that would connect to a source ES cluster and index all the data into a destination ES cluster.
I just wanted to make sure that we're not reinventing the wheel here.

Here is the blog entry by Elastic summarising the available options.

I am currently working on a solution using snapshots, mainly because using an intermediary like RabbitMQ or Kafka would be a major redesign for the application concerned.

You are but you aren't at the same time. Lots of folks working on Elasticsearch have been thinking about this problem for a long time but the general solution to it is still a long ways off. So in that way you are reinventing the wheel.

But you probably can solve the problem for how you use Elasticsearch much faster than we can solve it for all the uses. So in that way you aren't. This kind of thing happens all the time.

The blog @bvoros points to seems good as well. If you can tolerate the lag between the two DCs then snapshot/restore is great. It should be much more performant than running the writes on both sides. The neat thing about snapshots is that on many workloads you can take them very frequently and they aren't that big a deal. Like, you could snapshot every 30 minutes. I don't know about restoring every 30 minutes; I haven't tried that one.
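For what it's worth, a frequent-snapshot loop along those lines could look something like the sketch below. It's illustrative only: it assumes a snapshot repository named `dr_backup` is already registered on the cluster, a `logstash-*` index pattern, and the elasticsearch-py client (7.x-style API).

```python
# Sketch of taking a snapshot every 30 minutes with a time-stamped name.
# The repository name and index pattern are illustrative assumptions.
import time
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

def snapshot_once():
    name = "snap-" + datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    es.snapshot.create(
        repository="dr_backup",
        snapshot=name,
        body={"indices": "logstash-*", "include_global_state": False},
        wait_for_completion=True,  # block until the snapshot actually finishes
    )
    return name

while True:
    snapshot_once()
    time.sleep(30 * 60)  # every 30 minutes
```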

I tested snapshotting and restoring to/from S3 every 20 minutes, offset by 10 minutes, and it worked reasonably well.
0, 20, 40 - snapshot
10, 30, 50 - restore.
The primary cluster and the S3 bucket were in Europe and the replica cluster was in Oregon.

S3 snapshot repositories get slower the more snapshots there are, so I ended up rolling the snapshots with Curator and deleting any that were over 24 hours old.
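For reference, the same cleanup can be done straight against the snapshot API instead of Curator. This is only a sketch, reusing the hypothetical `dr_backup` repository name and the elasticsearch-py client (7.x-style API).

```python
# Sketch of deleting snapshots older than 24 hours directly via the snapshot API.
# Repository name is illustrative; note that listing "_all" snapshots is itself
# slow on S3, which is exactly why keeping the snapshot count low helps.
import time

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])
MAX_AGE_MS = 24 * 60 * 60 * 1000  # 24 hours

def prune_old_snapshots():
    now_ms = int(time.time() * 1000)
    snaps = es.snapshot.get(repository="dr_backup", snapshot="_all")["snapshots"]
    for snap in snaps:
        if now_ms - snap["start_time_in_millis"] > MAX_AGE_MS:
            es.snapshot.delete(repository="dr_backup", snapshot=snap["snapshot"])
```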

All of this was with actual data being indexed into ES: Logstash was sending a subset of our daily logs, about 5 GB of data per day, for the duration of the test.

So yeah, snapshotting is great if you are OK with that 20 minutes of lag.

I'd be far more comfortable if you could kick the restore process off after the snapshot process finishes rather than on a timetable. Same deal with snapshotting: start the next snapshot when the previous one finishes, in case a snapshot runs long. But other than that I think it's a sound plan.
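Roughly what I mean, as a sketch only: the restore is chained onto the end of the snapshot instead of both running on fixed timetables. The hosts, the `dr_backup` repository name, and the index pattern are made up for illustration; it assumes the repository is registered on both clusters (typically read-only on the replica side), the elasticsearch-py client, and that the target indices are closed or absent on the replica before the restore.

```python
# Sketch of chaining the restore to the snapshot instead of fixed timetables.
# Hosts, repository name, and index pattern are illustrative; assumes the target
# indices on the replica are closed or absent (a restore won't overwrite open ones).
from elasticsearch import Elasticsearch

primary = Elasticsearch(["http://primary-dc:9200"])
replica = Elasticsearch(["http://replica-dc:9200"])

def snapshot_then_restore(name):
    # wait_for_completion=True means an overrunning snapshot simply delays the
    # restore rather than racing it.
    primary.snapshot.create(
        repository="dr_backup",
        snapshot=name,
        body={"indices": "logstash-*", "include_global_state": False},
        wait_for_completion=True,
    )
    replica.snapshot.restore(
        repository="dr_backup",
        snapshot=name,
        body={"indices": "logstash-*"},
        wait_for_completion=True,
    )
```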

Have you run the numbers on the costs of the operations and data? Snapshots weren't designed to minimize operations so I'd double check that a process like you've outlined isn't costing more than you'd expect.

On the restore side the process simply looks up the most recent successful snapshot, so an overrunning snapshot has no effect on the restore, apart from the restore missing a beat. I ended up doing it this way because, again, the S3 repository is very slow at listing snapshots and returning snapshot details when there is a largish number of them.
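As a sketch, the lookup-and-restore step is roughly this. The host, the `dr_backup` repository name, and the index pattern are illustrative; it assumes the elasticsearch-py client and that the target indices are closed or absent on the replica.

```python
# Sketch of restoring from the most recent SUCCESS snapshot, skipping any
# snapshot that is still in progress. Names are illustrative.
from elasticsearch import Elasticsearch

replica = Elasticsearch(["http://replica-dc:9200"])

def restore_latest():
    snaps = replica.snapshot.get(repository="dr_backup", snapshot="_all")["snapshots"]
    finished = [s for s in snaps if s["state"] == "SUCCESS"]
    if not finished:
        return  # nothing usable yet; the restore just misses a beat
    latest = max(finished, key=lambda s: s["start_time_in_millis"])
    replica.snapshot.restore(
        repository="dr_backup",
        snapshot=latest["snapshot"],
        body={"indices": "logstash-*"},
        wait_for_completion=True,
    )
```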

I am about to test snapshotting into an NFS share, which will hopefully enable what you described.

Cool. Just don't put the data directory on NFS and everything should be fine. Snapshotting to a shared filesystem ought to work well, so long as the filesystem works well. NFS has a history, so be careful with it!

Thanks guys, it really helped our discussion :slight_smile:.
We're probably going to use the snapshot and restore mechanism.