Filebeat pod continuously restarting: Liveness probe failed: rss_mem 341245952

The Filebeat pod on a k8s worker node is continuously restarting with the following error message when the number of pods on the worker node increases:

Error message from the Filebeat pod:
Warning Unhealthy 44m kubelet, k8s-worker-2 Liveness probe failed: rss_mem 341245952

1)
I am using the following images in the Filebeat DaemonSet:

docker.elastic.co/beats/filebeat-oss:7.9.0
trustpilot/beat-exporter:0.1.1

2)
I have tried the following workarounds, but they did not resolve the issue:
a)

b)
Increased the memory limit of the Filebeat container in the Filebeat DaemonSet

c)
Upgraded the filebeat-oss Docker image to 7.10.2

3)
Filebeat memory and CPU allocation we are using:

    resources:
      limits:
        cpu: 100m
        memory: 512Mi
      requests:
        cpu: 100m
        memory: 512Mi
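
For reference, this is roughly what the change for workaround b) looked like; the 1Gi limit here is just the value I tried, not a recommendation:

    resources:
      limits:
        cpu: 100m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 512Mi

Raising the container limit alone did not help, which makes sense to me now: the restarts come from the liveness probe threshold shown below, not from the container being OOM-killed.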

4)
Update the Filebeat livenessProbe

I can see the following livenessProbe in the Filebeat DaemonSet, but the latest Filebeat manifests don't use an RSS memory check in the livenessProbe. The probe's threshold is 335544320 bytes (320 MiB), so the reported RSS of 341245952 bytes (about 325 MiB) fails the probe well before the 512Mi container limit is reached. Can I remove this RSS memory check for filebeat-oss 7.9.0?

    livenessProbe:
      exec:
        command:
        - /bin/bash
        - -c
        - var=($(curl -s http://localhost:5066/stats?pretty | grep 'rss\|open_files'
          | sed 's/[^0-9]*//g')); echo -e "rss_mem ${var[0]} \nopen_files ${var[1]}";
          if [ "${var[0]}" -gt "335544320" ] || [ "${var[1]}" -le "1"  ]; then
          exit 1; else exit 0; fi
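
If dropping the RSS check is acceptable, this is a sketch of what I am considering instead, keeping only the open_files check (my own trimmed version of the probe above, not something taken from the official manifests):

    livenessProbe:
      exec:
        command:
        - /bin/bash
        - -c
        - var=($(curl -s http://localhost:5066/stats?pretty | grep 'open_files'
          | sed 's/[^0-9]*//g')); echo "open_files ${var[0]}";
          if [ "${var[0]}" -le "1" ]; then exit 1; else exit 0; fi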

5)
Update filebeat.yml

Should I make any configuration change in the filebeat.yml file to fix this issue? We are using the default values. For example, should I tune:

bulk_max_size
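
For illustration, this is the kind of tuning I have in mind (assuming the Elasticsearch output and the default in-memory queue; the host and the numbers are placeholders, not measured recommendations):

    # Smaller in-memory queue keeps fewer events buffered, which should lower RSS
    queue.mem:
      events: 2048
      flush.min_events: 512
      flush.timeout: 1s

    output.elasticsearch:
      hosts: ["elasticsearch:9200"]   # placeholder host
      # Smaller bulk requests also reduce how much is held in memory at once
      bulk_max_size: 512

The trade-off would be throughput: a smaller queue and smaller bulk sizes mean more, smaller requests to Elasticsearch.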

Any clue?

Can I remove the RSS memory check (-gt "335544320") from the livenessProbe? Is it safe?
