Using the local gateway and backing up the data directory seems to be an
often-recommended and easy solution, but is this feasible as a cluster
grows (without a central coordinator)? Aren't there inherently going to be
timing issues when flushing, or when toggling 'translog.disable_flush'
(true before the copy, back to false after) on each node across the
cluster?
I'm not clear on whether it's possible to script this independently on
each node, coordinating somehow within the semantics of the Elasticsearch
cluster (using a simple script such as this one:
https://gist.github.com/1074906), or whether I should start with a single
system that directs the backups cluster-wide.
Also, this particular cluster isn't on AWS; otherwise I would probably use
the S3 gateway.
Thanks
-jim