We have ~200 TB of snapshots stored on S3, and I am looking for ways to clean them up.
I read in the official docs: "Snapshots are taken incrementally. This means that when it creates a snapshot of an index, Elasticsearch avoids copying any data that is already stored in the repository as part of an earlier snapshot of the same index."
Let's say we have snapshot1 for Day 1 and snapshot2 for Day 2. Does this incremental behavior mean snapshot2 relies on snapshot1, so that deleting snapshot1 would make snapshot2 unrestorable?
Or, if snapshot1 and snapshot2 are independent from a storage perspective, can I simply delete snapshot1 once snapshot2 has been created, since snapshot2 covers 100% of the data in snapshot1 (assuming the ES cluster only ingests data and never deletes it)?
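For concreteness, here is a minimal sketch of the scenario I mean, using Python's requests against the snapshot REST API. The cluster URL and the repository name my_s3_repo are placeholders, not my real setup:

```python
import requests

ES = "http://localhost:9200"   # placeholder cluster endpoint
REPO = "my_s3_repo"            # placeholder S3-backed snapshot repository

def take_snapshot(name: str) -> None:
    """Create a snapshot of all indices and wait for it to finish."""
    resp = requests.put(
        f"{ES}/_snapshot/{REPO}/{name}",
        params={"wait_for_completion": "true"},
        json={"indices": "*", "include_global_state": True},
    )
    resp.raise_for_status()

# Day 1: first snapshot.
take_snapshot("snapshot1")

# Day 2: second snapshot (incremental on top of whatever is in the repo).
take_snapshot("snapshot2")

# The question: once snapshot2 exists, is this delete safe,
# or does it break the ability to restore snapshot2?
requests.delete(f"{ES}/_snapshot/{REPO}/snapshot1").raise_for_status()
```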