Runaway memory usage

I am deploying Elasticsearch 7.4.0 in my cluster, specifying -Xms4096M -Xmx4096M for the nodes, but I keep running into pods getting evicted for memory pressure. Examples:

The node was low on resource: memory. Container elasticsearch was using 36874144Ki, which exceeds its request of 8Gi.
The node was low on resource: memory. Container elasticsearch was using 49884448Ki, which exceeds its request of 8Gi.

The top output from one of the nodes shows this rather impressive virtual address space usage:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
63500 1000 20 0 97.810g 4.584g 170780 S 46.0 8.3 912:46.37 java
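
For reference, the heap is set via ES_JAVA_OPTS in the nodeSet podTemplate, roughly like this (a trimmed sketch, not the full manifest; the nodeSet name, count, and cluster name are placeholders, and the apiVersion depends on your ECK version):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: logging
spec:
  version: 7.4.0
  nodeSets:
  - name: data
    count: 6
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS          # heap pinned to 4 GB min and max
            value: "-Xms4096M -Xmx4096M"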

Anyone know why Java is bloating so badly, and what I can do to stop it?

@JoeyLemur Are you setting memory limits as well? As described here:
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-managing-compute-resources.html

I'm not... if I remember correctly, the last time I did set limits, the pods died off because they were trying to use more memory than they should... the problem here is that Java is bloating horribly, and it's causing the node itself to run out of available memory.

How large is your data? The large virtual memory size isn't too surprising since Elasticsearch will mmap index data by default. The resident size looks close to the heap size you configured as well.
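
If you want to see where the memory is going, you can compare the JVM heap stats with the total process virtual size from the node stats API (a rough check; adjust the URL and credentials for your setup, since ECK enables security by default):

# heap_used_in_bytes is the JVM heap; total_virtual_in_bytes also counts mmapped segments
curl -s 'http://localhost:9200/_nodes/stats/jvm,process?pretty' \
  | grep -E 'heap_used_in_bytes|total_virtual_in_bytes'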

We do recommend setting requests == limits in your podTemplates to keep the Kubernetes nodes from running out of memory and to allow ES to run with the Guaranteed QoS class.
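
A minimal sketch of what that looks like in a nodeSet podTemplate (sizes are only an example; 8Gi matches the request in your eviction messages, and CPU is included because the Guaranteed QoS class requires requests == limits for both resources):

nodeSets:
- name: data
  count: 6
  podTemplate:
    spec:
      containers:
      - name: elasticsearch
        resources:
          requests:
            memory: 8Gi
            cpu: 2
          limits:
            memory: 8Gi   # memory requests == limits caps the container
            cpu: 2        # matching CPU too gives the pod Guaranteed QoS

With a 4 GB heap under an 8Gi limit there is still headroom for off-heap and mmap usage, and the limit keeps a single container from growing past what the Kubernetes node can absorb.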

The indices vary between 50 and 150 GB (6 shards * 2 replicas in hot; 3 shards * 2 replicas, shrunk, in warm).

Elasticsearch is the only thing running on this cluster. There are currently 3 master-only instances, 3 ingest-only instances, and 6 data-only instances, each with podAntiAffinity set for its respective type to spread things out across the Kubernetes nodes (and to make sure losing a node won't take down the whole Elasticsearch cluster).
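
The anti-affinity looks roughly like this for each type (the role label is a custom one I add in the podTemplate metadata; the key and values are just what I use, not anything ECK provides):

nodeSets:
- name: data
  count: 6
  podTemplate:
    metadata:
      labels:
        role: data                  # custom label, one value per node type
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                role: data          # keep data pods off the same host
            topologyKey: kubernetes.io/hostname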