Backup repository S3 bucket size is almost 5 times the actual index size

Are you creating your snapshots manually, so that each snapshot contains only the data for a specific day, or are you using an index pattern that matches everything?

Because if you keep 30 days of indices and create snapshots using SLM, each snapshot will cover the data for all 30 days. It only uploads data that is not already present in the repository from earlier snapshots, but the segments belonging to the older indices will still be referenced by the new snapshot.
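For example, a daily SLM policy over a broad index pattern looks roughly like this. This is a minimal sketch that creates the policy through the REST API using Python's requests library; the cluster address, policy name, and repository name (my_s3_repository) are placeholders, not values from your setup:

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Placeholder policy: adjust names and schedule to your cluster.
policy = {
    "schedule": "0 30 1 * * ?",        # run every day at 01:30
    "name": "<nightly-snap-{now/d}>",  # date-math snapshot name
    "repository": "my_s3_repository",
    "config": {"indices": ["*"]},      # each run snapshots ALL matching indices
    "retention": {"expire_after": "30d"},
}

resp = requests.put(f"{ES}/_slm/policy/nightly-snapshots", json=policy)
resp.raise_for_status()
print(resp.json())
```

Because `config.indices` matches everything, every daily run references all 30 days' worth of indices, even though most of that data was already uploaded by earlier snapshots.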

So when you delete a snapshot, it will not free everything, because some of its segments are still referenced by other snapshots; only the segments that no remaining snapshot references are actually removed from the bucket.
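One way to see this sharing is the snapshot status API, which reports both the total size a snapshot references and the incremental size it actually added to the repository. A rough sketch, again with placeholder cluster address, repository, and snapshot names:

```python
import requests

ES = "http://localhost:9200"       # placeholder cluster address
REPO = "my_s3_repository"          # placeholder repository name
SNAP = "nightly-snap-2024.01.15"   # placeholder snapshot name

resp = requests.get(f"{ES}/_snapshot/{REPO}/{SNAP}/_status")
resp.raise_for_status()

# "total" is everything the snapshot references; "incremental" is what
# this snapshot actually uploaded that earlier snapshots had not.
for snap in resp.json()["snapshots"]:
    stats = snap["stats"]
    print(snap["snapshot"],
          "total:", stats["total"]["size_in_bytes"],
          "incremental:", stats["incremental"]["size_in_bytes"])
```

If the incremental size is much smaller than the total size, most of the snapshot's data is shared with other snapshots, and deleting it will free correspondingly little.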

The explanation in this older, similar question seems to fit your case.

As an extreme example, if you have an index that is never written to, and you snapshot it every day, then each of those snapshots will refer to the same segment files.
If you delete a single snapshot, it will recover only a very small amount of disk space (the cluster state and metadata about that snapshot), but you would have to remove every single snapshot before the segment files themselves were deleted.
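To make that concrete, here is a toy model (not Elasticsearch's actual implementation) of why deleting one snapshot only frees segments that no other snapshot references:

```python
# Toy model: each snapshot is just the set of segment files it references.
snapshots = {
    "snap-day1": {"seg_a", "seg_b"},
    "snap-day2": {"seg_a", "seg_b"},   # index unchanged: same segments
    "snap-day3": {"seg_a", "seg_c"},   # one new segment after a write/merge
}

def freed_by_deleting(name: str) -> set[str]:
    """Segments that become unreferenced if `name` is deleted."""
    still_referenced = set().union(
        *(segs for snap, segs in snapshots.items() if snap != name)
    )
    return snapshots[name] - still_referenced

print(freed_by_deleting("snap-day1"))  # set() -> nothing is freed
print(freed_by_deleting("snap-day3"))  # {'seg_c'} -> only the unique segment
```

The same logic applies to the segment files uploaded to S3: deduplication is why the bucket holds far less than snapshot_count × index_size, but it is also why deleting a single snapshot frees so little, and why the bucket can still be several times the live index size when many snapshots reference overlapping but not identical sets of segments.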