Apologies if this has been answered before; I've had a look at the
docs and archives but may have missed something. We've got a single
index on 0.18.6 running on an EC2 cluster that we want to back with
the s3 gateway. We generate a few GB of documents a day at a regular
pace (no huge spikes), and set a TTL of 15 days on all documents. As
such, we're hoping that the size of the data stored in s3 will remain
reasonably constant over time. Two questions:
1 - is that much data realistic to push through the s3 gateway on an
ongoing basis?
2 - will elasticsearch remove old data from s3 as documents expire via
the TTL, so the bucket doesn't grow indefinitely?
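
In case it's useful, here's roughly what we have (the bucket name is a
placeholder and I'm quoting settings from memory, so the exact keys may
be slightly off). In elasticsearch.yml, with the cloud-aws plugin
installed:

    gateway:
        type: s3
        s3:
            bucket: our-gateway-bucket
    cloud:
        aws:
            access_key: <access key>
            secret_key: <secret key>

and the TTL is enabled in the type mapping along these lines:

    {
        "our_type" : {
            "_ttl" : { "enabled" : true, "default" : "15d" }
        }
    }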