High CPU after updating Filebeat from version 7.12.0 to 8.8.1

Description: After updating Filebeat from version 7.12.0 to 8.8.1 we started observing high CPU usage. We deliberately disabled the cronjob and deployment resource metadata in autodiscover, thinking it would help, but it did not. We also doubled our CPU limit (cpu: 12), yet Filebeat keeps hitting the limit despite the extra headroom.

Operating System: Linux x86_64, Flatcar Container Linux distribution, with kernel version 5.15.106.
Filebeat Version: 8.8.1
Steps to Reproduce:
Update to version 8.8.1 (or any version from 7.17.0 onwards).
Expected Behavior: After upgrading Filebeat to version 8.8.1, CPU should remain stable.
Actual Behavior: After upgrading Filebeat to version 8.8.1, CPU usage has increased significantly.
Amount of logs a Filebeat process is processing per minute: 5670
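
In case profiling data is useful for diagnosis, here is a minimal sketch of how we could expose the Beats HTTP monitoring endpoint with Go pprof enabled, so a CPU profile can be captured while usage is high (host/port are placeholders, not what we actually run):

http.enabled: true          # expose the local stats/debug HTTP endpoint
http.host: localhost        # placeholder bind address
http.port: 5066             # placeholder; the Beats default HTTP port
http.pprof.enabled: true    # enable the Go pprof profiling handlers

With that in place, a CPU profile could presumably be pulled from the standard pprof path, e.g. http://localhost:5066/debug/pprof/profile, while the container is at its limit.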
Additional Information: configuration file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: filebeat
    pipeline-managed: supportive-addons
  name: filebeat-node
  namespace: logging
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      containers:
      - args:
        - -c
        - /etc/filebeat.yml
        - -e
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        image: filebeat:8.7.0
        name: filebeat
        resources:
          limits:
            cpu: 12
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi
        securityContext:
          privileged: true
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/filebeat.yml
          name: config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /etc/fill_index_fallback_processor.js
          name: config
          readOnly: true
          subPath: fill_index_fallback_processor.js
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /var/log
          name: varlog
          readOnly: true
        - mountPath: /etc/puki-certs
          name: ca-certificates
          readOnly: true
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      nodeSelector:
        node.kubernetes.io/role: node
      priorityClassName: cluster-essential
      serviceAccountName: filebeat-platform
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node.kubernetes.io/role
        value: master
      - key: CriticalAddonsOnly
        operator: Exists
      volumes:
      - configMap:
          defaultMode: 384
          name: filebeat-config
        name: config
      - hostPath:
          path: /var/lib/docker/containers
        name: varlibdockercontainers
      - hostPath:
          path: /var/log
        name: varlog
      - hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
        name: data
      - configMap:
          name: ca-certificates
        name: ca-certificates

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
          add_resource_metadata:
            cronjob: false # disabled on purpose to work around a memory leak issue. See: https://discuss.elastic.co/t/filebeat-memory-leak-via-filebeat-autodiscover-and-200-000-goroutines/322082
            deployment: false # disabled on purpose to work around a memory leak issue. See: https://discuss.elastic.co/t/filebeat-memory-leak-via-filebeat-autodiscover-and-200-000-goroutines/322082
            namespace:
              enabled: true
    fields_under_root: true
    fields:
      kubernetes.cluster: cluster1
      kubernetes.stage: stage1
    processors:
      - add_host_metadata:
          netinfo.enabled: false
          when.not.equals.kubernetes.namespace_labels.namespace-type: application
      - drop_fields:
          fields: ['ecs.version', 'kubernetes.namespace_uid']
          ignore_missing: true
          when.not.equals.kubernetes.namespace_labels.namespace-type: application
      - drop_fields:
          fields: ['kubernetes.node.uid', 'kubernetes.pod.ip', '/^kubernetes.node.labels.*/']
          ignore_missing: true
      - copy_fields:
          fields:
            - from: kubernetes.labels.logging_k8s_zone/index-name
              to: index-name
          fail_on_error: false
          ignore_missing: true
          when.not.has_fields: ['index-name']
      - add_fields:
          target: ''
          fields:
            index-name: k8s-logs
          when:
            and:
            - not.has_fields: ['index-name']
            - or:
              - equals.kubernetes.namespace_labels.namespace-type: shared
              - equals.kubernetes.namespace_labels.namespace-type: helper
      - add_fields:
          fields:
            agent.hostname: ${HOSTNAME}
          target: ""
      - copy_fields:
          fields:
            - from: container.image.name
              to: kubernetes.container.image
          fail_on_error: false
          ignore_missing: true
          target: "kubernetes"
      - decode_json_fields:
          fields: ['message']
          overwrite_keys: true
          target: ""
      - copy_fields:
          fields:
            - from: kubernetes.namespace_labels.tenant
              to: tenant
          fail_on_error: false
          ignore_missing: true
          when.not.has_fields: ['tenant']
      - drop_event:
          when.not.has_fields: ['index-name']
    output.logstash:
      hosts:
      - host1
      ssl:
        certificate_authorities:
          - "cert_path"

NB: We tried every version from 7.17.0 onwards, and the issue is the same.

version 7.12.0 is EOL and no longer supported. Please upgrade ASAP.
