We now have a need for remote cluster replication. (Can't wait for native replication, whenever that might be coming...) I don't need active/active (yet), but I do need to replicate all the indices in the cluster to a remote cluster. The sources writing to Elasticsearch shouldn't need to know anything about where the remote, read-only clusters are. So I've been playing with the idea of using an index listener to watch for changes/deletions within each index. Each change gets serialized as a bulk-style message and dropped onto a Kafka topic. Kafka then replicates its topic log to the remote data center, where a Kafka consumer picks up the changes and pushes them into its local Elasticsearch cluster.
I've written a little POC plugin that does exactly this. Obviously it assumes the index mappings and settings already exist in the remote cluster.
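For what it's worth, the listener-side serialization step might look something like this. This is just an illustrative sketch, not code from the POC: the event shape and function name are made up, and it only shows turning captured index/delete operations into an Elasticsearch bulk-API NDJSON body that could be dropped onto the Kafka topic.

```python
import json

def to_bulk_payload(events):
    """Serialize captured change events into a bulk-API NDJSON body.

    `events` is a hypothetical list of dicts, e.g.:
      {"op": "index",  "index": "logs", "id": "1", "source": {...}}
      {"op": "delete", "index": "logs", "id": "2"}
    """
    lines = []
    for ev in events:
        # Action/metadata line, e.g. {"index": {"_index": ..., "_id": ...}}
        lines.append(json.dumps({ev["op"]: {"_index": ev["index"], "_id": ev["id"]}}))
        # Index operations are followed by the document source; deletes are not.
        if ev["op"] == "index":
            lines.append(json.dumps(ev["source"]))
    # The bulk body must be newline-delimited and end with a trailing newline.
    return "\n".join(lines) + "\n"

payload = to_bulk_payload([
    {"op": "index", "index": "logs", "id": "1", "source": {"msg": "hello"}},
    {"op": "delete", "index": "logs", "id": "2"},
])
print(payload)
```

The nice property of shipping bulk-shaped payloads is that the remote-side consumer can POST them to `_bulk` more or less verbatim.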
I'm just looking for some feedback on this approach.