On Wednesday, May 4, 2011 at 8:33 PM, Bob wrote:
In March of last year there was discussion of supporting replication across data centers: a single cluster that spans two data centers, with a special allocation strategy that makes sure a shard and its replica do not exist in the same data center, and with reads/searches preferring the "local" data center's shards before going to another data center. Is this still planned?
Yes, this is still planned for a future release. Note that this solution is a good fit for DCs (or sites) that have a speedy connection between them. For DCs connected over a slow network, a different solution will need to be implemented to support this in a built-in fashion.
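
As a rough sketch of how the allocation half of the setup Bob describes can be expressed with Elasticsearch's shard allocation awareness settings (not yet available at the time of this thread, so treat this purely as an illustration): the attribute name "dc", the index name "my-index", and the endpoint http://localhost:9200 below are placeholders, not anything from the thread.

# Sketch only: spread shard copies across a "dc" node attribute so a
# primary and its replica end up in different data centers.
import requests

ES = "http://localhost:9200"

# Each node's elasticsearch.yml carries its data-center tag, e.g.
#   node.attr.dc: dc1      (older releases used node.dc: dc1)

# Awareness spreads copies of a shard across the listed attribute;
# forced awareness leaves replicas unassigned rather than doubling up
# in the surviving DC when the other one is unreachable.
requests.put(
    f"{ES}/_cluster/settings",
    json={
        "persistent": {
            "cluster.routing.allocation.awareness.attributes": "dc",
            "cluster.routing.allocation.awareness.force.dc.values": "dc1,dc2",
        }
    },
    timeout=10,
)

# On the read side, preference=_local asks the search to use shard copies
# on the coordinating node itself when possible; finer per-DC read routing
# depends on the release and is not shown here.
resp = requests.get(
    f"{ES}/my-index/_search",
    params={"preference": "_local", "q": "field:value"},
    timeout=10,
)
print(resp.json())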