So it sounds like you are speaking of the new frozen tier, which is backed by searchable snapshots.
But feel free to read this just to clear up any potential confusion.
So, assuming you are speaking about the new frozen tier: the guidance of roughly 20 shards per 1 GB of JVM heap is not meant to be applied to a frozen tier backed by searchable snapshots.
So you do not size your frozen tier nodes based on the number of shards.
A good way to think about a frozen tier node is to look at what we do in Elastic Cloud's HW profiles.
We use a node with 8 vCPUs, 60 GB RAM, and 4.8 TB of local SSD, which supports up to about 100 TB of searchable snapshots... (on AWS this is an i3en).
Of course there can always be use-case-specific considerations, but that's what we do in Elastic Cloud... If you have normal shard sizes, say between 10 and 50 GB per shard, you should be at a decent starting point.
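To make that concrete, here's a quick back-of-the-envelope sketch (assuming the ~100 TB per node figure from the Elastic Cloud profile above, and treating 1 TB as 1024 GB) of roughly how many shards one frozen tier node would end up holding at those shard sizes:

```python
# Rough estimate: shard count on a single frozen tier node holding
# ~100 TB of searchable snapshots, at typical shard sizes.
TOTAL_TB = 100
total_gb = TOTAL_TB * 1024  # 102,400 GB

for shard_gb in (50, 10):
    shards = total_gb // shard_gb
    print(f"{shard_gb} GB shards -> ~{shards:,} shards on the node")

# 50 GB shards -> ~2,048 shards on the node
# 10 GB shards -> ~10,240 shards on the node
```

So even at the small end of "normal" shard sizes you land around 10K shards, which lines up with the extreme case below.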
Yes, that blog showed an extreme case of a petabyte with 12,500 shards hanging off a single node... I would interpret that as an upper bound... But it's pretty cool that it works that way.
So you could create a node as described above, start adding S3-backed searchable snapshots (100 TB or so to begin with), and keep adding until performance flattens out or no longer meets your requirements.
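In case it helps, mounting onto the frozen tier is done with the searchable snapshots mount API using `storage=shared_cache` (that's what makes it a partially mounted index served from the node's shared cache). A sketch, where the repository, snapshot, and index names are placeholders for your own:

```shell
# Mount an index from an S3 snapshot repository as a partially mounted
# (frozen tier) index. storage=shared_cache selects the frozen tier's
# shared-cache mode rather than a full local copy.
curl -X POST "localhost:9200/_snapshot/my_s3_repo/my_snapshot/_mount?storage=shared_cache&wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'
{
  "index": "my_index",
  "renamed_index": "my_index-frozen"
}'
```

Repeat per index (or use ILM with a frozen phase to automate it) and watch search latency as the total mounted data grows.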