Based on your observations, it seems like your cumulative/total ingest rate is around ~30-40k events/second. That sounds more like a limit on what Elasticsearch can process than a limitation on the Filebeat end.
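One quick way to check which side is the bottleneck is to look at the write thread pool on the Elasticsearch node (assuming the default `localhost:9200`; adjust host/port and auth for your setup):

```
curl -s "localhost:9200/_cat/thread_pool/write?v&h=node_name,name,active,queue,rejected"
```

If you see the queue staying full and the rejected counter climbing while Filebeat is shipping, Elasticsearch is the limiting factor rather than Filebeat.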
> I have set the JVM heap size of my ES to 64GB, where the server has 755GB of RAM
64GB is too high; you should be around 26-30GB of heap so the JVM can use compressed object pointers, which gives the best performance.
See: Advanced configuration | Elasticsearch Guide [8.8] | Elastic
> Set `Xms` and `Xmx` to no more than the threshold for compressed ordinary object pointers (oops). The exact threshold varies but 26GB is safe on most systems and can be as large as 30GB on some systems.
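As a concrete sketch, rather than editing `jvm.options` directly you can drop a file into `config/jvm.options.d/` (the filename and the 26g value below are just examples; pick whatever fits your system, as long as it stays under the compressed-oops threshold):

```
# config/jvm.options.d/heap.options (example file name)
# Keep min and max equal, and below the compressed-oops threshold (~26-30GB)
-Xms26g
-Xmx26g
```

Restart the node after changing this and the freed RAM will still benefit Elasticsearch indirectly via the filesystem cache.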
Which version of Elasticsearch and Filebeat are you using?
With the amount of resources on that system, there is probably far more RAM than a single Elasticsearch instance can use. Depending on the storage available, you might want to run multiple Elasticsearch instances on that server, give each one dedicated storage, and then increase the shard count of the backing index to spread the load.
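As a rough sketch of what I mean (node names, paths, and ports below are placeholders, and cluster-formation/discovery settings are omitted), each instance would get its own data path and ports in its `elasticsearch.yml`:

```yaml
# instance 1 elasticsearch.yml (illustrative values)
node.name: node-1
path.data: /mnt/disk1/elasticsearch
http.port: 9200
transport.port: 9300

# instance 2 elasticsearch.yml (illustrative values)
node.name: node-2
path.data: /mnt/disk2/elasticsearch
http.port: 9201
transport.port: 9301
```

With two data nodes you would then raise `number_of_shards` to at least 2 on the backing index (or the index template that creates it) so writes are spread across both instances.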
(In my opinion, 30-40k events/second is a pretty reasonable rate for a single Elasticsearch instance to be processing.)