Segments memory_in_bytes excessively large with a lot of open indices

I can't verify this using the API because we're on 1.7. Unless, of course, the `*.tim` files on disk are fully loaded into the heap, in which case I could calculate it myself.
Would you mind verifying that on your cluster (`du *.tim` == `terms_memory_in_bytes`)?
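
Something along these lines is what I have in mind (the data path assumes a default install layout and `my-index` is a placeholder; the per-component breakdown in the stats only exists on 2.x+, on 1.7 you just get the total `memory_in_bytes`):

```sh
# Total on-disk size of the terms dictionaries (.tim files) for one index.
# Path is an assumption for a default layout -- adjust for your setup.
find /var/lib/elasticsearch/*/nodes/*/indices/my-index -name '*.tim' \
  -exec du -cb {} + | tail -n1

# Compare against the segments stats reported by the cluster:
curl -s 'localhost:9200/my-index/_stats/segments?pretty' \
  | grep memory_in_bytes
```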

Fields: anywhere between 20 and 60.
Docs per index: anywhere from a few to 250 million.

These stats are fairly broad because we split the various types of logging into separate indices (for various reasons).

From *How to decrease terms_memory footprint*:

> As an option I can open and close older indexes at query time (open -> search -> close), but IMHO it's a time/resource-consuming decision.

That wouldn't be an option for us at all, considering some of our indices are 100+ GB.
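
For reference, the open -> search -> close pattern from that thread boils down to the following (a sketch only; the index name is a placeholder):

```sh
# Open a closed index, run the query, close it again.
# Opening reloads the segment metadata (terms dictionaries etc.) into
# the heap, and the shards have to recover before the search can run --
# exactly the cost that makes this a non-starter for 100+ GB indices.
curl -XPOST 'localhost:9200/logs-2015.01/_open'
curl -s 'localhost:9200/logs-2015.01/_search' -d '{"query": {"match_all": {}}}'
curl -XPOST 'localhost:9200/logs-2015.01/_close'
```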