Assume the cumulative Index Size is approximately 200GB.
Do you mean that the total amount of data in the cluster will be 200GB, or that within the 30-minute window 200GB of data will change requiring a new backup?
Either way, generally speaking, the backup/snapshot process is fairly efficient on newer versions of Elasticsearch. As long as you have a good (fast) network connection to the backup destination, backups generally shouldn't cause many performance issues.
The total amount of data in the cluster is about 200 GB.
Within the 30-minute window, assume roughly 10-15 GB of data is ingested.
If I configure snapshots to be taken every 30 minutes and backed up to a remote repository, how significant would the impact on read/write performance be while the snapshot is running? I assume it's a separate, single thread that transports the backup, so I believe it shouldn't really affect the usual reads and writes.
(Running a 3-node cluster with 2 vCPUs and 8 GiB of RAM per node.)
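For reference, a 30-minute snapshot schedule like the one described can be set up with an SLM (snapshot lifecycle management) policy. A minimal sketch, assuming a registered repository named `my_remote_repo` (the policy name, retention values, and repository name are placeholders, not from the thread):

```
PUT _slm/policy/every-30-min
{
  "schedule": "0 */30 * * * ?",
  "name": "<snap-{now/d}>",
  "repository": "my_remote_repo",
  "config": {
    "include_global_state": false
  },
  "retention": {
    "expire_after": "7d",
    "min_count": 5,
    "max_count": 50
  }
}
```

The cron expression includes a seconds field, so `0 */30 * * * ?` fires at the top of every 30th minute. Because snapshots are incremental at the segment level, each run should only copy segments created since the previous snapshot, which is why frequent schedules like this are usually tolerable.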