Excessive garbage collection when I try to vertically scale the pods

Hello there,

I've recently gotten the cluster up and running via ECK. In order to increase performance, I have allocated 3 nodes, each on an n1-highmem-2 instance, and I have also increased the memory limit. I am now seeing a ton of garbage collection logs in Elasticsearch. Here is my Elasticsearch resource YAML:

apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.1.0
  nodes:
  - nodeCount: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: "6Gi"
              cpu: "100m"
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: standard

My logs are absolutely filled with [gc][10417] overhead, spent [503ms] collecting in the last [1.1s], and performance is now worse than on smaller clusters. Is there some other way I'm supposed to increase the memory limits, rather than through the pod template?

Cheers and thanks for the assistance.

Hi @tadgh,

The way you set the memory limit looks correct. In 0.8, ECK allocates half of your 6Gi to the JVM heap by default (that behaviour will change in 0.9).
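
If you'd rather control the heap explicitly instead of relying on that default, you can set ES_JAVA_OPTS in the pod template. This is just a sketch: the 3g value is an example that mirrors the half-of-6Gi default, not something your cluster requires:

podTemplate:
  spec:
    containers:
    - name: elasticsearch
      env:
      # Explicit heap size; keep -Xms and -Xmx equal and well below the container memory limit
      - name: ES_JAVA_OPTS
        value: "-Xms3g -Xmx3g"
      resources:
        limits:
          memory: "6Gi"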

I suggest you also increase the CPU limit: 100m is very low, it means 0.1 CPUs. Elasticsearch can make use of much more than that (e.g. 16 CPUs!).
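
For example, something like the following (illustrative values only; an n1-highmem-2 has 2 vCPUs, so there is little point asking for more than that on this instance type):

podTemplate:
  spec:
    containers:
    - name: elasticsearch
      resources:
        requests:
          memory: "6Gi"
          cpu: "2"
        limits:
          memory: "6Gi"
          # Give the JVM (and its GC threads) a full 2 CPUs instead of 0.1
          cpu: "2"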

Once I increased the CPU limit, this issue disappeared.