How to keep a pod from using all of its memory limit

Elasticsearch always uses all of the memory I specify as its limit, no matter how much I increase the limit or lower the min/max heap size. The actual heap usage is less than half of the max heap (2 GB), yet the pod keeps firing alerts that memory is close to the limit. It seems this is by design. How do I configure Elasticsearch to consume only the requested pod memory rather than the full limit?

        spec:
          containers:
            - name: elasticsearch
              env:
                - name: ES_JAVA_OPTS
                  value: -Xms2g -Xmx2g
              resources:
                requests:
                  memory: 2Gi
                  cpu: 0.5
                limits:
                  memory: 6Gi
                  cpu: 2

Hi @levitative, thanks for your question.

The Elasticsearch process should not consume more memory than the configured max heap size. How are you checking the memory usage?
Also, note that it's recommended to set your requests equal to your limits (so the pod gets the "Guaranteed" QoS class) and to set both min and max heap to half of the available memory. You can see the docs for more details.
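Applied to the manifest above, that recommendation would look roughly like this (a sketch, assuming a 4Gi pod is acceptable for your workload; adjust the sizes as needed):

```yaml
spec:
  containers:
    - name: elasticsearch
      env:
        - name: ES_JAVA_OPTS
          value: -Xms2g -Xmx2g   # min/max heap = half of the pod memory
      resources:
        requests:
          memory: 4Gi            # requests == limits -> QoS class "Guaranteed"
          cpu: 2
        limits:
          memory: 4Gi
          cpu: 2
```

With requests equal to limits, the kubelet will not evict the pod under node memory pressure before lower-QoS pods, and the half-of-memory heap leaves the other half for the OS page cache that Lucene relies on.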

Hello dkow, thanks for the response.

I changed the memory request to 5Gi, but it did not work. I'm checking memory usage with the Prometheus metric container_memory_usage_bytes.

How do I configure Elasticsearch to consume only the requested pod memory rather than the full limit?

To do that you would want to lower the limit. The OS will use whatever memory you allow it for caching data (the Linux page cache), and container_memory_usage_bytes includes that cache, which is why the pod appears to approach its limit even though the heap is small; container_memory_working_set_bytes is closer to what the OOM killer actually considers. For monitoring Elasticsearch it is more useful to look at JVM heap usage than at pod memory used. The ES docs have additional info on memory usage: Heap size settings | Elasticsearch Guide [8.11] | Elastic
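For example, heap usage can be read directly from the nodes stats API (a sketch in Console syntax, assuming the default endpoint; the `filter_path` parameter just trims the response to the heap percentage):

```
GET _nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_used_percent
```

If that value stays well below 75% while the pod metric climbs, the growth you are seeing is page cache, not the JVM.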