So I can set up a duplicate cluster to back up data in near real time?
Trying to replicate a master/slave configuration will only result in
frustration. There are steps that can get you close, but ultimately it is
better to simply have a heterogeneous cluster.
I cannot speak for the Elastic team, but I doubt such a setup is in their
plans. Not everyone has big data. Having two servers, a main and another
for high availability, is such a common practice in the RDBMS world.
Perhaps a quick tutorial on having a third master/non-data node will help.
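As far as I know that tutorial doesn't exist yet, but the gist is a node that is eligible to be elected master while holding no data. On Elasticsearch versions before 7.x this is a pair of boolean settings in `elasticsearch.yml` (newer releases replaced them with `node.roles`); a minimal sketch:

```yaml
# elasticsearch.yml for a dedicated master-eligible, non-data node
# (pre-7.x style; 7.9+ uses: node.roles: [ master ])
node.master: true   # eligible to be elected master
node.data: false    # holds no shards
```

With three master-eligible nodes you can lose one and still have a quorum for master election.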
I can speak for myself and what I've heard. We talk about how cool it'd be to have a changes API, and we actively work towards it. It's "high hanging fruit", meaning we have to do a ton of stuff to build the foundation for it, but we are doing those things. They just take a while, and we aren't working on them exclusively.
One thing we've certainly discussed is using the changes API to stream changes from one cluster to another - kind of like a master/slave cluster. But this is all talk. Nothing is for sure.
Before I was an Elastic employee I worked on a system that had two Elasticsearch clusters in different data centers, kept in sync by cloning the jobs that sent the updates to each cluster. Because they were jobs, we could shut down an entire Elasticsearch cluster and they would just queue up. That was the plan, anyway. I never saw it in action because I left to work for Elastic. But I still think it is a good idea.
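The pattern above can be sketched in a few lines. This is a toy illustration under my own assumptions, not the system I worked on: each cluster gets its own job queue, every update is cloned to all queues, and a queue simply accumulates while its cluster is down. The `ClusterWriter` name and the dicts standing in for real Elasticsearch clients are hypothetical.

```python
from collections import deque


class ClusterWriter:
    """Queue-backed writer for one cluster. A plain dict stands in for
    an Elasticsearch client here; real code would call its index()."""

    def __init__(self, cluster):
        self.cluster = cluster  # stand-in for an ES client (assumption)
        self.queue = deque()
        self.up = True

    def submit(self, doc_id, doc):
        self.queue.append((doc_id, doc))
        self.flush()

    def flush(self):
        # While the cluster is down, updates just queue up.
        while self.up and self.queue:
            doc_id, doc = self.queue.popleft()
            self.cluster[doc_id] = doc


def index_everywhere(writers, doc_id, doc):
    """Clone each update to every cluster's job queue."""
    for w in writers:
        w.submit(doc_id, doc)


# Two "data centers"
dc1, dc2 = {}, {}
writers = [ClusterWriter(dc1), ClusterWriter(dc2)]

index_everywhere(writers, "1", {"title": "first"})
writers[1].up = False                       # take dc2 offline
index_everywhere(writers, "2", {"title": "second"})
# dc1 now has both docs; dc2's second update is still queued
writers[1].up = True
writers[1].flush()                          # dc2 drains its queue and catches up
```

The point is that the sync logic lives entirely outside Elasticsearch: nothing cluster-side needs to know about replication, and a durable queue (in practice something like Kafka rather than an in-memory deque) absorbs downtime on either side.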