Max heap size in Elasticsearch nodes

Hi,
I have a 3-node architecture with the following roles:
"roles" : [
"ingest",
"master",
"data",
"ml"
]
Currently each server has 32 GB of memory, which means each node has a 16 GB heap for Elasticsearch. In the last few weeks I have been working with machine learning jobs, and due to the high cardinality of the fields, the memory consumption of the jobs is high. My question is: how much could my cluster grow in memory without affecting performance?
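For reference, the heap and physical RAM per node can be confirmed with the cat nodes API (a sketch, assuming the cluster is reachable on localhost:9200; adjust the host and any credentials as needed):

```shell
# List each node's max heap, physical RAM, and roles
# (localhost:9200 is an assumption, not taken from the thread)
curl -s "localhost:9200/_cat/nodes?v&h=name,heap.max,ram.max,node.role"
```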

Thanks.

You should know that ML jobs run outside of the JVM and, by default, only have access to 30% of the machine's overall RAM (see xpack.ml.max_machine_memory_percent in Machine learning settings in Elasticsearch | Elasticsearch Guide [7.13] | Elastic).
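xpack.ml.max_machine_memory_percent is a dynamic cluster setting, so if the machines have RAM to spare it can be raised without restarting the nodes. A minimal sketch (the value 40 and the localhost:9200 endpoint are only illustrative assumptions):

```shell
# Raise the share of machine RAM that ML jobs may use
# from the default 30% to 40% (example value only)
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{ "persistent": { "xpack.ml.max_machine_memory_percent": 40 } }'
```

Note that this only changes the ML budget; it does not change the JVM heap, which on your nodes is still capped at 16 GB.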

There is also an older, but still relevant, blog post on sizing and dedicated ML nodes.
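If the ML workload keeps growing, one common pattern is to move jobs onto a dedicated ML node so they stop competing with data and master duties for RAM. A sketch of the elasticsearch.yml for such a node, assuming the 7.9+ `node.roles` syntax (not taken from the linked blog):

```yaml
# elasticsearch.yml for a dedicated ML node:
# runs ML jobs only; holds no data and is not master-eligible
node.roles: [ "ml", "remote_cluster_client" ]
```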