About Filebeat upgrade from 7.6.2 to 7.10.2: Kafka output much slower

I wanted to fix a bug with Filebeat's Kubernetes autodiscover, so I upgraded Filebeat from 7.6.2 to 7.10.2, but I ran into a problem: the rate at which log events are output to Kafka is much slower on the new version. I don't know how to fix it; can anyone help? Thanks.

Filebeat runs as a DaemonSet in Kubernetes; here is the config YAML:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    logging.level: info
    path.home: "/usr/share/filebeat"
    path.config: "/usr/share/filebeat"
    path.data: "/usr/share/filebeat/data"
    path.logs: "/usr/share/filebeat/logs"
    http.enabled: true
    http.host: localhost
    http.port: 5066

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                has_fields: ["kubernetes.labels.kafkaTopic"]
              config:
                - type: log
                  enabled: true
                  ignore_older: 48h
                  close_timeout: 2h
                  multiline.pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
                  multiline.negate: false
                  multiline.match: after
                  paths:
                    - /data/logs/${data.kubernetes.labels.service}-${data.kubernetes.labels.cluster}_${data.kubernetes.namespace}/${data.kubernetes.pod.name}/*/*.log
                - type: log
                  enabled: true
                  ignore_older: 48h
                  close_timeout: 2h
                  symlinks: true
                  json.keys_under_root: false
                  paths:
                    - /var/log/pods/${data.kubernetes.namespace}_${data.kubernetes.pod.name}_${data.kubernetes.pod.uid}/${data.kubernetes.container.name}/*.log         
                  processors:
                    - rename:
                        fields:
                          - from: "json.log"
                            to: "message"
                          - from: "json.stream"
                            to: "stream"
                          - from: "json.time"
                            to: "datetime"
                        ignore_missing: false
                        fail_on_error: false
                    - drop_fields:
                        fields: ["json"]

    processors:
      - if:
          regexp:
            message: "^{.*}"
        then:
          - rename:
              fields:
                - from: "message"
                  to: "message_json_str"
              ignore_missing: true
              fail_on_error: false
          - decode_json_fields:
              fields: ["message_json_str"]
              process_array: true
              max_depth: 5
              target: ""
              overwrite_keys: false
              add_error_key: true
          - drop_fields:
              fields: ["message_json_str"]
      - rename:
          fields:
            - from: "log.file.path"
              to: "log_path"
            - from: "kubernetes.replicaset.name"
              to: "kubernetes.replicaset_name"
            - from: "kubernetes.pod.name"
              to: "kubernetes.pod_name"
            - from: "kubernetes.node.name"
              to: "kubernetes.node_name"
            - from: "host.name"
              to: "fagent"
          ignore_missing: true
          fail_on_error: false
      - drop_fields:
          fields: 
            - "kubernetes.container"
            - "kubernetes.replicaset"
            - "kubernetes.replicaset.name"
            - "kubernetes.pod"
            - "kubernetes.node"
            - "kubernetes.labels.pod-template-hash"
            - "kubernetes.labels.cop"
            - "kubernetes.statefulset.name"
            - "kubernetes.labels.statefulset_kubernetes_io/pod-name"
            - "kubernetes.labels.controller-revision-hash"
            - "kubernetes.namespace"
            - "host.name"
            - "agent"
            - "ecs"
            - "log"
            - "input"
            - "host"
            - "container"

    output.kafka:
      enabled: true
      hosts: '${KAFKA_HOSTS}'
      topic: "%{[kubernetes.labels.kafkaTopic]}"
      partition.round_robin:
        reachable_only: true
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
      channel_buffer_size: 1024
      keep_alive: 60
      client_id: ${HOSTNAME:beats}
      worker: 3

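For context, this config leaves the internal memory queue and the Kafka batch size at their defaults. In case it helps with comparing the two versions, these are the throughput-related knobs as I understand them from the docs (a sketch with what I believe are the 7.x defaults, not values I have tuned or verified):

    queue.mem:
      events: 4096              # total events buffered in memory (documented default)
      flush.min_events: 2048    # minimum events per batch handed to the output (documented default)
      flush.timeout: 1s         # max wait before flushing a partial batch (documented default)
    output.kafka:
      bulk_max_size: 2048       # max events per Kafka request (documented default)
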
The monitored output rate graph is built from this PromQL query:
promql: rate(filebeat_libbeat_output_events{type="acked",node=~"$node"}[2m])

Still alone here. Anyone?

@faec Could you please take this question?

So if I understand right, you used the exact same config with both versions, right?

It's hard to know exactly what's happening from a rate change alone. Is ingestion going slower, or are events being dropped entirely? Do you have any information about which events are missing? (For example, if only events of a certain type or from a certain file are missing, that needs a different fix than if only the output has slowed down.) How many Filebeat nodes does the graph at the bottom represent? Are all nodes still producing events after the upgrade?
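
One way to separate those cases: if your exporter exposes the other libbeat output counters under the same metric name (I'm assuming type values like "total", "failed", and "dropped" exist alongside "acked" in your setup; worth checking what the exporter actually emits), comparing their rates against the acked rate should show whether events are failing or being dropped rather than just arriving more slowly:

promql: rate(filebeat_libbeat_output_events{type="total",node=~"$node"}[2m])
promql: rate(filebeat_libbeat_output_events{type="failed",node=~"$node"}[2m])
promql: rate(filebeat_libbeat_output_events{type="dropped",node=~"$node"}[2m])

If total keeps pace with the old version but acked falls behind, the output is likely the bottleneck; if total itself drops, the inputs are probably reading less.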