I'm attempting to snapshot my Logstash indices on EC2 and back them up to an S3 repository. I only have a single dev EC2 instance with dummy data (my prod has multiple instances), and only a single node backing this repository, as confirmed by `curl -XPOST 'http://localhost:9200/_snapshot/snapshot_name/_verify'`. When I run curator to take the snapshot, it appears to complete successfully, but the disk usage on S3 is huge relative to the original index on disk.
For instance, a given Logstash index directory on EC2 (`/logstash-2015.09.28/`) has a `du` of 164 KB, while the corresponding S3 path (`s3://mys3bucket/snapshots/elasticsearch_backup/indices/logstash-2015.09.28/`) comes to 305 MB!
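For reference, this is roughly how I'm measuring both sides (a sketch of my setup; the local data path and bucket/prefix names are from my example above, and it assumes the AWS CLI is installed and configured):

```shell
# Size of the index directory on the EC2 instance's local disk
du -sh /var/lib/elasticsearch/*/nodes/0/indices/logstash-2015.09.28/

# Total size of the snapshot files for that index in the S3 bucket
# (--summarize prints an aggregate "Total Size" line at the end)
aws s3 ls s3://mys3bucket/snapshots/elasticsearch_backup/indices/logstash-2015.09.28/ \
    --recursive --summarize
```

The first command reports ~164 KB; the second reports ~305 MB total.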
I assume this is not expected behavior? What could be causing this and how do I fix it?