Thanks for the follow-up explanation.
What you describe is a scheduled re-run of a batch transform. I think there is little benefit in trying to run this as a continuous transform. You could instead re-create and re-run the transform every day. The next release will make this easier, as it will provide a reset API. Currently you have to delete the transform and the destination index, re-create the transform and start it yourself; with reset this can be reduced to 2 API calls. We will further improve the re-running of batch transforms in future releases.
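For reference, a minimal sketch of both workflows in Dev Console syntax. `my_transform`, `my_dest_index` and the transform body are placeholders for your own setup, and the `_reset` call is only an assumption of what the upcoming API could look like:

```
# current workflow: delete the transform and its destination index,
# then re-create the transform and start it again
DELETE _transform/my_transform
DELETE my_dest_index

PUT _transform/my_transform
{ ... your transform configuration ... }

POST _transform/my_transform/_start

# once the reset API is available, this should shrink to 2 calls:
POST _transform/my_transform/_reset
POST _transform/my_transform/_start
```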
However, I have a better suggestion: the destination of a transform is an index and works like any other index. You can query that index and further aggregate on it. You could use a transform to roll up the data into e.g. 1-day buckets. To retrieve your 7-day value, you run another aggregation on the transform destination. This query will be lightning fast, because it only has to aggregate over a tiny number of documents. I would actually make this more fine-grained and bucket into e.g. 10-minute intervals. The trade-off is speed vs. size; it depends on the amount of data, so you will have to experiment and find the right balance.
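As an illustration (the index names and the fields `@timestamp` and `bytes` are only assumptions, adapt them to your data), a transform that buckets the data per day, followed by a query that aggregates the last 7 days on the destination index. For finer granularity you would use e.g. `"fixed_interval": "10m"` in the `date_histogram` instead:

```
PUT _transform/daily_rollup
{
  "source": { "index": "my-source-index" },
  "dest":   { "index": "my-rollup-index" },
  "pivot": {
    "group_by": {
      "day": {
        "date_histogram": { "field": "@timestamp", "calendar_interval": "1d" }
      }
    },
    "aggregations": {
      "bytes_sum": { "sum": { "field": "bytes" } }
    }
  }
}

# the destination is a normal index: sum up the pre-aggregated
# daily documents of the last 7 days into a single value
GET my-rollup-index/_search
{
  "size": 0,
  "query": { "range": { "day": { "gte": "now-7d/d" } } },
  "aggs": {
    "bytes_last_7d": { "sum": { "field": "bytes_sum" } }
  }
}
```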
If you go with the suggested approach, you might also want to look into rollup as an alternative to transform.
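If rollup fits your case better, a comparable job definition could look roughly like this (again, the index and field names are only assumptions); the rolled-up data is then queried through the `_rollup_search` endpoint:

```
PUT _rollup/job/daily_rollup_job
{
  "index_pattern": "my-source-index*",
  "rollup_index": "my-rollup-index",
  "cron": "0 0 1 * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": { "field": "@timestamp", "fixed_interval": "1d" }
  },
  "metrics": [
    { "field": "bytes", "metrics": [ "sum" ] }
  ]
}
```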