The article says:

> In fact, it takes until around 40–50 GB of allocated heap before you have the same effective memory of a heap just under 32 GB using compressed oops
But since we have machines with 120 GB of RAM, we can afford to give around 60–70 GB of RAM to the heap.
Also, if we allocate just 30 GB to the heap, would Elasticsearch make good use of the remaining 90 GB of RAM?
We just want to make sure we don't under- or over-allocate heap memory on our production machines.
The point of that blog post is being missed entirely. The answer to how much heap you should have is independent of the physical amount of memory on the system; physical memory only serves as an upper constraint. The main input into how much heap you should have is your workload. For this you have to do some measuring and tuning that only you can do.
And yes, whatever is left over for the filesystem cache will be utilized by Elasticsearch.
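For reference, the heap size is set in Elasticsearch's `jvm.options` file (or via the `ES_JAVA_OPTS` environment variable). A minimal sketch, assuming you settle on a heap under the compressed-oops cutoff after measuring — the 30g value is illustrative, not a recommendation:

```
# jvm.options (illustrative values only; tune against your own workload)
# Set min and max heap to the same value to avoid resize pauses,
# and keep it below ~32 GB so compressed oops stay enabled.
-Xms30g
-Xmx30g
```

You can confirm that compressed oops are actually in effect at a given heap size with `java -Xmx30g -XX:+PrintFlagsFinal -version | grep UseCompressedOops`, or by checking the `using compressed ordinary object pointers` line in the Elasticsearch startup log.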