Apologies for the delay. Assuming HTTP or TCP, if some hosts are slower than others the main consequence will be that Heartbeat keeps additional file descriptors allocated for each slow host. This will require some benchmarking on your end if you want to be confident that it will work fine.
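One easy thing to watch during a test like that is how many file descriptors the heartbeat process holds open. A quick way to check (assuming a single process named heartbeat):

```
# Count open file descriptors for the running heartbeat process
ls /proc/$(pgrep -x heartbeat)/fd | wc -l
```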
It's a tough thing to simulate, but it is possible. What I would do is spin up a few VMs in the cloud as test targets, run nginx on each of them, and generate a simple Heartbeat config that points 20,000 monitors at each VM. To simulate a slow connection you can use qdiscs on Linux. For an example see: linux - Simulating a slow connection with tc - Server Fault.
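Here is a rough sketch of that setup. The target IP, interface name, and delay values are placeholders, and monitor field names vary a bit across Heartbeat versions, so treat this as a starting point rather than a drop-in config:

```
# On the Heartbeat box: generate heartbeat.yml with 20,000 HTTP monitors
# pointed at one target VM (203.0.113.10 is a placeholder address).
echo "heartbeat.monitors:" > heartbeat.yml
for i in $(seq 1 20000); do
  cat >> heartbeat.yml <<EOF
- type: http
  id: nginx-check-$i
  schedule: '@every 10s'
  urls: ["http://203.0.113.10/?monitor=$i"]
EOF
done

# On the target VM: add 200ms (+/- 50ms jitter) of latency to outgoing
# packets to simulate slow hosts (eth0 is a placeholder interface name).
tc qdisc add dev eth0 root netem delay 200ms 50ms

# Remove the delay once the test is finished.
tc qdisc del dev eth0 root
```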
You may also want to set heartbeat.scheduler.limit (see the docs), which caps the number of tasks Heartbeat runs concurrently.
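For example, in heartbeat.yml (the value here is just an illustration; tune it based on your benchmarking):

```
# Cap the number of concurrent checks, which indirectly bounds
# how many file descriptors Heartbeat holds open at once.
heartbeat.scheduler:
  limit: 1000
```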
ICMP does not require the allocation of file descriptors, so it scales much further.
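If ICMP checks are enough for some of your hosts, a monitor definition looks roughly like this (the host is a placeholder):

```
heartbeat.monitors:
- type: icmp
  schedule: '@every 10s'
  hosts: ["203.0.113.10"]
```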
We'd be really curious to see the results of a test like this!