Can we give more than 32GB of memory to a dedicated machine learning node?

The recommendation is that the JVM heap size for Elasticsearch should be kept below 32GB so that the JVM can use compressed object pointers (compressed oops). A 33GB heap actually uses memory less efficiently than a 31GB heap, because once the heap crosses that threshold the JVM switches to full 64-bit object references.
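For example, on a self-managed node you could pin the heap below that threshold with a file in the `jvm.options.d` directory (the filename and the 31GB figure here are just illustrative; the exact cutoff for compressed oops varies slightly by JVM):

```
# config/jvm.options.d/heap.options  (illustrative; adjust for your setup)
# Keep min and max heap equal, and below ~32GB so the JVM can still
# use compressed ordinary object pointers (compressed oops).
-Xms31g
-Xmx31g
```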

The JVM heap size is different from the size of the machine/VM/container where the software is running. It's certainly possible, and at large scale desirable, for the machine/VM/container to have more than 32GB.

So this question comes down to what you mean by "node". The Elastic docs are a bit confusing in this regard. Sometimes "node" is used to mean a JVM running Elasticsearch and sometimes "node" is used to mean the whole machine/VM/container that it's running on.

If you mean JVM heap size then no, you shouldn't give an ML JVM heap more than 32GB. In fact, ML nodes use most of their memory outside the JVM, because the ML jobs run in separate native processes, so the JVM heap on an ML node should be smaller than on a data node.
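As a rough sketch of what a dedicated ML node's settings might look like (the values are illustrative, not a tuned recommendation), you give it the `ml` role and leave most of the machine's RAM for the native processes:

```yaml
# elasticsearch.yml on a dedicated ML node (illustrative values)
node.roles: [ ml, remote_cluster_client ]

# Upper bound on how much of the machine's total memory the native ML
# processes may use (defaults to 30); only raise it on machines that
# do nothing but ML.
xpack.ml.max_machine_memory_percent: 30
```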

But the more memory a machine has, the more native ML processes it can run, so you can certainly have machines/VMs/containers for ML that are bigger than 32GB.
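If you want to see the gap between JVM heap and total machine memory on each node, a `_cat/nodes` request along these lines shows both (column names as listed in the `_cat/nodes` docs):

```
GET _cat/nodes?v&h=name,node.role,heap.max,ram.max
```

On an ML node you'd expect `ram.max` to be much larger than `heap.max`, with the difference available to the native ML processes.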