Hi,
I am trying to set up a snapshot repository on AWS or Google Cloud. There is a tradeoff between the cost of requests and the cost of storage, so I would like to estimate how many requests Elasticsearch makes to the cloud provider when it creates an incremental snapshot. I read in one of the forums that it depends on the number of segments. My segment count is -- "count" : 27828 -- does that mean at least this many requests are required for every incremental snapshot, or can Elasticsearch perform the operation in bulk? The cost of API operations is a major factor in deciding which cloud provider to go with. Also, does Elasticsearch just download the metadata to work this out?
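For context, this is roughly how I am checking the segment counts today, so you can see what numbers I am working from. It is only a sketch: the cluster URL is a placeholder and I am not sure the segment count is even the right quantity to base a request estimate on.

```python
import requests

ES_URL = "http://localhost:9200"  # placeholder cluster address

# Cluster-wide segment count (the kind of figure I quoted above).
stats = requests.get(f"{ES_URL}/_cluster/stats").json()
print("total segments:", stats["indices"]["segments"]["count"])

# Per-index breakdown, since only some indices change between snapshots.
rows = requests.get(f"{ES_URL}/_cat/segments?format=json&h=index,segment,size").json()
per_index = {}
for row in rows:
    per_index[row["index"]] = per_index.get(row["index"], 0) + 1
for index, count in sorted(per_index.items(), key=lambda kv: -kv[1]):
    print(index, count)
```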
My other question: if I copy the entire contents of a repository into a new repository, will I be able to restore from that new repository, which holds a copy of my old one?
For example, I have repository A, which has all the backups I need. I create a repository B and copy the contents of A into B. If I restore from repository B, will the data be corrupted?
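To make the question concrete, here is a rough sketch of what I have in mind, hitting the REST API from Python. The repository name, bucket name, and snapshot name are placeholders, and I have assumed an S3 repository, but it could equally be GCS:

```python
import requests

ES_URL = "http://localhost:9200"  # placeholder cluster address

# Register repository B, pointing at the bucket that holds the copied contents of A.
# I would mark it readonly so nothing writes to it while I test the restore.
requests.put(
    f"{ES_URL}/_snapshot/repo_b",
    json={"type": "s3", "settings": {"bucket": "copied-snapshots-bucket", "readonly": True}},
).raise_for_status()

# Check that the snapshots copied over from A are visible in B.
snapshots = requests.get(f"{ES_URL}/_snapshot/repo_b/_all").json()
for snap in snapshots["snapshots"]:
    print(snap["snapshot"], snap["state"])

# Restore one of them from B (snapshot name is a placeholder).
requests.post(f"{ES_URL}/_snapshot/repo_b/snapshot_1/_restore").raise_for_status()
```

Is this a supported way to move a repository, or would the restored data be unreliable?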