About setting the heap size in Elasticsearch

As per the documentation at https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html#heap-size, it is recommended to set Xmx and Xms for the heap size to no more than 50% of available memory. But if we have segregated Elasticsearch roles like master-only, data-only, and ingest nodes, would the JVM heap requirements change?
Does this 50% rule need to be kept for all the roles?
And what issues would we face if we do not follow this recommendation?
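
(For reference, the heap is configured in jvm.options or via the ES_JAVA_OPTS environment variable; the 4g figure below is just an illustration of the 50% rule for a node with 8GB of RAM, not a recommendation for any particular role:)

    # config/jvm.options (or: ES_JAVA_OPTS="-Xms4g -Xmx4g")
    # Xms and Xmx should be set to the same value
    -Xms4g
    -Xmx4g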

Yes.

Elasticsearch also uses significant off-heap memory and relies on the operating system's filesystem cache, so if your heap size exceeds 50% of your available memory the node may need more than 100% of the available memory, which is obviously not possible and usually results in the node being killed (e.g. by the kernel's OOM killer).
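
You can quickly compare each node's configured heap with the RAM visible to it using the cat nodes API, for example:

    GET _cat/nodes?v&h=name,node.role,heap.max,ram.max

Any node where heap.max is much more than half of ram.max is at risk.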

Hello!

Very new to the ELK stack. I'm running ELK in a Docker container on Ubuntu. I keep getting the following just from loading visualizations:

"[esaggs] > Request to Elasticsearch failed: {"statusCode":429,"error":"Too Many Requests","message":"[circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [248561544/237mb], which is larger than the limit of [246546432/235.1mb], real usage: [248561544/237mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=14976/14.6kb, in_flight_requests=0/0b, accounting=6377785/6mb], with { bytes_wanted=248561544 & bytes_limit=246546432 & durability="PERMANENT" }"}"

From what I searched, the heap size is the culprit. I have adjusted both Xmx and Xms to 4G and keep getting the error. Also, that value of 235.1mb never changes?
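
In case I'm setting it wrong, this is roughly how I'm passing the heap options to the container (the image tag below is a placeholder, not my exact version):

    # docker-compose.yml excerpt
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:<version>
        environment:
          - "ES_JAVA_OPTS=-Xms4g -Xmx4g"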

Any ideas?