Welcome to the forum @Zoree
It's always helpful to include as much info as possible on your setup, e.g. how many nodes, what hardware spec or resources are allocated to the nodes, what version of Elasticsearch, a simple one-sentence idea of what your cluster does (logs, security, whatever), your ingest pattern, average document sizes, ...
My understanding of what you wrote is that you have some number of indices (how many is not given), averaging around 200GB per index, so some bigger and some smaller, each with one primary shard and an unknown number of replica shards. You have also tried to bulk ingest 25.1GB of data in one call, which failed because it's bigger than some Elasticsearch limit, hence the error.
If I've understood wrong, please correct me.
If you want to get past the error without changing anything else, then break the bulk request up into smaller chunks, both now and on an ongoing basis. Personally, I think that would be a sensible thing to do anyway.
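For example, something like this rough sketch, assuming your 25.1GB payload is a single newline-delimited bulk file (the file name and chunk size below are just placeholders):

```
# Assumes big-bulk.ndjson is the existing bulk payload, one action line followed by
# one source line per document (no newlines inside documents).
# 10000 lines = 5000 documents per chunk; tune this to whatever your cluster handles well.
split -l 10000 big-bulk.ndjson bulk-chunk-

for f in bulk-chunk-*; do
  curl -sk -u USER:PASSWORD \
    -H 'Content-Type: application/x-ndjson' \
    -X POST "https://ESHOST:9200/_bulk" \
    --data-binary "@$f"
  echo    # blank line between responses, makes eyeballing errors easier
done
```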
The limits can be seen with
curl -sk -u USER:PASSWORD https://ESHOST:9200/_nodes/stats/breaker
I think there is a way to increase the specific limit, but I'd rather know more about what you are doing before going there.
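For completeness only, and assuming the error really is a circuit breaker tripping (the error message would confirm which one), the breaker limits are dynamic cluster settings and could in principle be changed like this, though again I wouldn't do that without understanding the workload first:

```
# Illustration only - indices.breaker.total.limit is the parent breaker; whether this
# is the limit you are actually hitting depends on the error message, and the value
# here is just a placeholder.
curl -sk -u USER:PASSWORD -X PUT "https://ESHOST:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"indices.breaker.total.limit": "80%"}}'
```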
In the Elasticsearch documentation on sizing your shards, there's a section titled:
"Aim for shards of up to 200M documents, or with sizes between 10GB and 50GB"
for which the one-line summary is "Very large shards can slow down search operations and prolong recovery times after failures".
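In your case, ~200GB in a single primary shard is well above that range, which suggests spreading each index over more primary shards. A rough illustration only (index name and shard count are placeholders, the right numbers depend on your data):

```
# ~200GB spread over 5 primary shards is roughly 40GB per shard, inside the 10-50GB guidance.
curl -sk -u USER:PASSWORD -X PUT "https://ESHOST:9200/my-new-index" \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"number_of_shards": 5, "number_of_replicas": 1}}'
```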