Filebeat 8.x slow "memory leak" from kubernetes watcher

Hi,
After upgrading from Filebeat 7 we've observed the memory usage of certain daemons continuously increasing until it reaches around 90% of the defined limits, sometimes ending in an OOM kill. After every restart the memory usage drops back to normal levels, but then starts climbing again. The daemons most affected are on specific clusters and nodes that host ephemeral workloads, where pods are constantly created and deleted.

(I tried adding a screenshot from a dashboard, but as a new user I'm blocked from posting multiple media attachments.)

We've turned on the profiling server and collected heap profiles from the pods, both at startup and after a few days of running. The profiles seem to point to the kubernetes watcher as the main culprit, but we don't have enough context to understand why. Can you help us figure out if there's anything we can change in our config to avoid this?
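For reference, this is roughly how we've been collecting the profiles (assuming the default monitoring endpoint on localhost:5066 inside the pod; <filebeat-pod> is a placeholder for the affected pod's name):

kubectl port-forward pod/<filebeat-pod> 5066:5066
go tool pprof -sample_index=inuse_space http://localhost:5066/debug/pprof/heap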

We are currently using version 8.13.4.

Here's our current config with some values omitted:

http:
  enabled: true
  pprof:
    enabled: true
# logging.level: debug
# Wait for filebeat to finish sending queued events to the output when shutting down (otherwise the last messages are lost when the node is shut down)
filebeat.shutdown_timeout: 15s
filebeat.inputs:
- type: filestream
  id: crash-logs
  paths:
    - /var/crash/*/dmesg.*
  fields:
    token: <some token>
  fields_under_root: true
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      include_creator_metadata: false
      add_resource_metadata:
        namespace:
          include_annotations: ["<omitted>"] # we include 5 annotations from the namespace to help route logs to subaccounts
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*-${data.kubernetes.container.id}.log
        fields:
          token: <some-token>
      templates:
        - condition:
            # some condition
          config:
            # config to route logs to different accounts based on namespace

      appenders:
        - condition:
            equals:
              <some field>: <some account>
          config:
            fields:
              token: <some account token>
      # there are 20+ other accounts like this here

# Add Kubernetes Metadata
processors:
  # we have a bunch of processors for:
  #   - adding metadata based on container labels
  #   - dropping events based on container name
  #   - renaming kubernetes labels to root fields
  #   - dropping fields from events

filebeat.registry.path: /usr/share/filebeat/data/registry

output.file:
  enabled: false

output.logstash:
  hosts: ["<logstash host>"]
  ssl:
    # ssl config
  bulk_max_size: 512
  slow_start: true

setup.template:
  name: 'general'
  pattern: '*'
  enabled: false