We are deploying an Elasticsearch 7.9.3 cluster in a production environment. Regarding whether or not to set the Java heap size explicitly, we find something in the documentation that seems a little contradictory.
According to this page, it is "important to configure heap size" because "by default, Elasticsearch tells the JVM to use a heap with a minimum and maximum size of 1 GB".
However, another entry explains that Elasticsearch "automatically sets the JVM heap size based on a node’s roles and total memory", and recommends "the default sizing for most production environments".
When we deploy the cluster without any heap memory parameters, we see the Elasticsearch process running with the JVM options "MinHeapSize" and "MaxHeapSize" set to 1 GB.
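For reference, if explicit sizing turns out to be necessary, this is the kind of override we would apply (a sketch: the 4g value and the `heap.options` filename are only examples; the `jvm.options.d` directory is the standard mechanism for custom JVM options in this version):

```
# /etc/elasticsearch/jvm.options.d/heap.options
# Example only: pin min and max heap to the same explicit value
-Xms4g
-Xmx4g
```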
Therefore, should we explicitly configure the heap size parameters to a higher value?
Otherwise, will the heap always stay at 1 GB?
Thanks in advance.