Logstash-Forwarder has been replaced by Filebeat. The normal flow in this scenario is something like this:

Filebeat ---> Logstash ---> Redis ---(DC Boundary)---> Logstash ---> Elasticsearch
File-based inputs generally handle stoppages in the pipeline quite well, as they can simply stop reading until the rest of the processing pipeline clears and then continue. If you only have file inputs and are not at risk of losing data to aggressive log rotation, you may be able to do without Redis and send data directly from Filebeat to the Logstash instance in the remote DC.
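As a rough sketch of the direct approach, a minimal `filebeat.yml` could point straight at the remote Logstash instance. The paths, hostname, and port below are placeholders, not values from this setup:

```
# Hypothetical filebeat.yml: ship file-based logs directly to a
# Logstash instance in the remote DC, with no Redis in between.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log    # placeholder path

output.logstash:
  # Placeholder host; Logstash would run a beats input on this port.
  hosts: ["logstash.remote-dc.example.com:5044"]
```

Because Filebeat tracks its position in each file, it can pause and resume shipping when the remote end is slow, which is what makes this Redis-free variant viable for file inputs.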
Other types of inputs, e.g. those based on TCP and/or UDP, often cannot stop processing without causing problems upstream or losing data. If you have these types of inputs, buffering through a message queue such as Redis is often recommended.
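To illustrate the buffered flow from the diagram, here is a sketch of the two Logstash configurations on either side of the Redis buffer. Hostnames, the port, and the list key are placeholders; the plugin options (`host`, `data_type`, `key`) are standard Redis input/output plugin settings:

```
# Shipper-side Logstash (local DC): receive events and push them
# onto a Redis list that acts as the buffer.
input {
  beats { port => 5044 }
}
output {
  redis {
    host      => "redis.local"        # placeholder host
    data_type => "list"
    key       => "logstash-buffer"    # placeholder key
  }
}

# Indexer-side Logstash (remote DC): drain the same Redis list
# and index the events into Elasticsearch.
input {
  redis {
    host      => "redis.local"        # placeholder host
    data_type => "list"
    key       => "logstash-buffer"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```

With this layout, a stalled indexer only causes the Redis list to grow; the TCP/UDP inputs on the shipper side can keep accepting events instead of dropping them.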