Do you have the Logstash configurations from the other team?
Also, I think you could test this without Beats to see if there is any improvement.
You have an rsyslog server receiving logs and writing them to files, then Filebeat reading those files and sending them to a Logstash server. You could configure your rsyslog server to also forward the logs directly to that Logstash server over TCP or UDP, with Logstash listening on a TCP or UDP input on the other side.
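A rough sketch of what I mean, assuming a hypothetical host `logstash.example.com` and port `5144` (adjust to your environment):

```
# rsyslog: forward everything over TCP (use a single @ for UDP)
*.* @@logstash.example.com:5144
```

```
# Logstash pipeline: listen on the same port
input {
  tcp {
    port => 5144
    type => "syslog"
  }
  # or: udp { port => 5144 }
}
```

This takes Filebeat out of the path entirely, so if the lag goes away you have a strong hint that the problem is not on the rsyslog/Filebeat side.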
But as I said, it is very hard to troubleshoot this without information about the Logstash side.
As @stephenb said, normally you would have multiple Filebeats sending to a centralized Logstash. What you have is an rsyslog server receiving logs from multiple sources and writing to multiple files, then just one Filebeat sending everything to a centralized Logstash.
But is this the issue? Maybe, maybe not; it is not possible to know until we have some information about how the Logstash on the other side is configured. For example, if that Logstash is using persistent queues, the bottleneck could be the disk speed of the Logstash machine. The behaviour you described, where it lags for a couple of hours and then resolves itself, is consistent with some issues I had when using persistent queues.
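For reference, this is what persistent queue settings look like in `logstash.yml` (the values here are only illustrative, I have no idea how the other team configured it):

```
# logstash.yml -- persistent queue (illustrative values)
queue.type: persisted
queue.max_bytes: 8gb                  # events are buffered on disk up to this size
path.queue: /var/lib/logstash/queue   # this path needs to be on a fast disk
```

If `queue.type` is `persisted` and `path.queue` sits on a slow disk, the queue write speed itself can become the bottleneck once the queue starts filling.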
Also, 1.3 TB/day for just one Filebeat may or may not be too much; it all depends on what your logs look like and whether that volume arrives evenly throughout the day or in bursts.
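As a rough back-of-envelope, assuming the volume is spread evenly: 1.3 TB/day is about 1.3 × 10^12 bytes / 86,400 s ≈ 15 MB/s sustained, which at an average event size of around 1 KB would be roughly 15,000 events/s. That is often manageable for a single Filebeat, but if most of it arrives in bursts the peak rate can be several times higher.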
I still think that the issue may be on the receiving side.