How to read a huge file (tens of GB)

I am capturing a huge file from Suricata; it grows by multiple MB per minute.
I just can't read it fast enough with a single server. Is it possible to process a single file with more than one server?
I have a SAN that I can use for shared file access.

Interleaved reading of a single file from multiple hosts is hard. Can you split the file? If not, have a single process read the file and push each raw line to a broker (for example Kafka or Redis), from which multiple Logstash processes can read and perform further processing.
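
Below is a minimal sketch of that layout, assuming Kafka as the broker; the file path, broker address, topic name, and consumer group are all placeholders. One Logstash instance tails the file and forwards raw lines untouched; every other server runs the consumer pipeline, and because they share a `group_id`, Kafka divides the topic's partitions among them.

```
# --- Reader pipeline: runs once, on the host that sees the file ---
# Forwards each raw line to Kafka without parsing it.
input {
  file {
    path => "/var/log/suricata/eve.json"   # hypothetical path
    start_position => "beginning"
  }
}
output {
  kafka {
    bootstrap_servers => "broker1:9092"            # hypothetical broker
    topic_id => "suricata-raw"                     # hypothetical topic
    codec => plain { format => "%{message}" }      # ship the raw line only
  }
}

# --- Consumer pipeline: runs on each processing server ---
# A shared group_id makes Kafka split partitions across the servers.
input {
  kafka {
    bootstrap_servers => "broker1:9092"
    topics => ["suricata-raw"]
    group_id => "suricata-consumers"
  }
}
filter {
  json { source => "message" }    # Suricata EVE output is JSON lines
}
output {
  elasticsearch { hosts => ["http://es1:9200"] }   # hypothetical destination
}
```

Note that the parallelism comes from the topic's partition count, not from Logstash itself: create at least as many partitions as you plan to run consumers, otherwise the extra servers will sit idle.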