ElasticSearch backup techniques and strategies?

We are on Elasticsearch 1.7.4. Our index is 110 GB and it is NOT time-based data,
so there is a process indexing data continuously, 24x7.
Apart from using it as a document store, we run fairly simple queries and GET requests.

8 data nodes, 3 client nodes, and 3 master nodes. The data nodes have locally attached SSDs.

We create snapshots on a nightly basis. However, when I tried restoring from a snapshot onto another cluster, it took 4 hours.

My goal for backups is that we should be able to recover in under 5 minutes.

Are there any strategies for restoring in a reasonable amount of time? What other tools/techniques are people using beyond standard snapshot/restore, which seems better suited to restoring time-based data?

Snapshot/restore is the right way to back up your indices, time-stamped data or not.
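For reference, a minimal snapshot setup against the 1.x REST API might look like the following. This is a sketch: the repository name `my_backup`, the snapshot name `nightly_1`, and the path `/mnt/es_backups` are all illustrative, and a shared-filesystem repository requires that path to be mounted on every node.

```shell
# Register a shared-filesystem snapshot repository
# (names and paths below are examples, not your actual config)
curl -XPUT 'localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/mnt/es_backups",
    "compress": true
  }
}'

# Take a named snapshot and block until it completes
curl -XPUT 'localhost:9200/_snapshot/my_backup/nightly_1?wait_for_completion=true'
```

Because snapshots are incremental at the segment level, nightly snapshots into the same repository only copy segments that changed since the last one.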

4 hours seems like a long time to restore 110 GB to a new cluster, assuming dedicated gigabit Ethernet writing to SSDs.

Are your restores throttled? I think (not sure) that by default in 1.x ES throttles restores to 40 MB/sec (max_restore_bytes_per_sec).
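If that throttle is the bottleneck, it can be raised by re-registering the repository with a higher `max_restore_bytes_per_sec` (it's a per-node repository setting in 1.x). A sketch, assuming the same illustrative repository and snapshot names as above; the `200mb` value is an example, not a recommendation:

```shell
# Re-register the repository with a higher restore throttle
# (40mb/sec per node is the 1.x default; 200mb here is illustrative)
curl -XPUT 'localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/mnt/es_backups",
    "max_restore_bytes_per_sec": "200mb",
    "max_snapshot_bytes_per_sec": "200mb"
  }
}'

# Then restore onto the target cluster and wait for completion
curl -XPOST 'localhost:9200/_snapshot/my_backup/nightly_1/_restore?wait_for_completion=true'
```

For a rough sanity check: 110 GB at 40 MB/sec is on the order of 45 minutes through a single stream, so a 4-hour restore suggests something else (network, disk, or shard recovery settings) is also limiting throughput.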