High RAM usage on Kubernetes pod after migration from the container input type to filestream

Hello,
we are in the process of migrating from the container input (input: container) to filestream.
Filebeat runs as a DaemonSet on Kubernetes, and each Filebeat pod collects logs from all applications deployed on the corresponding Kubernetes node. With the container input, each Filebeat pod consumes ~100 MB of RAM. After switching to filestream, RAM usage grows to as much as 3 GB per pod. Is this normal behaviour? Here is our config:

filebeat.inputs:
  - type: filestream
    id: "filebeat-${NODE_NAME}"
    prospector.scanner.symlinks: true
    paths:
      - /var/log/containers/*.log
    enabled: true
    parsers:
      - container:
          format: cri
    fields:
      logstashSource: "filebeat-k8s--console-json"
    processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          in_cluster: true
          default_matchers.enabled: false
          matchers:
          - logs_path:
              logs_path: /var/log/containers/
      - drop_event:
          when:
            not.equals.kubernetes.labels.logging-type: "console-json"
      - decode_json_fields:
          fields: ["message"]
          process_array: true
          max_depth: 1
          target: "logData"
      - rename:
          when:
            not.has_fields: ['logData.appName']
          fields:
            - from: "kubernetes.labels.app"
              to: "logData.appName"
      - rename:
          when:
            not.has_fields: ['logData.message']
          fields:
            - from: "message"
              to: "logData.message"
      - rename:
          fields:
            - from: "kubernetes.node.name"
              to: "logData.node"
      - rename:
          fields:
            - from: "fields.logstashSource"
              to: "logData.logstashSource"
      - drop_fields:
          fields: ["stream", "message", "prospector", "offset", "input", "source", "kubernetes", "fields", "log"]

Do you have old logs in /var/log/containers/*?

It probably started reading everything on that path, and depending on the amount of old logs it can use more resources for a while.

Give it some time and see whether the resource usage normalizes.
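
If it is indeed re-reading a large backlog, the filestream input has options to skip old files and to limit how many it reads in parallel. A minimal sketch with purely illustrative values (option names are from the filestream input documentation):

filebeat.inputs:
  - type: filestream
    id: "filebeat-${NODE_NAME}"
    paths:
      - /var/log/containers/*.log
    # Skip files whose last modification is older than this (illustrative value)
    ignore_older: 48h
    # Drop registry state for files inactive longer than this;
    # must be greater than ignore_older plus the scanner check interval
    clean_inactive: 72h
    # Close idle file handles after this period of inactivity
    close.on_state_change.inactive: 2m
    # Limit how many files this input reads in parallel
    harvester_limit: 50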

Thank you for the reply. RAM usage grows during pod startup. After a few minutes it starts to drop and eventually stabilizes, but the final usage is still above 1 GB of RAM, which is far more than with the container input.

Any idea why RAM usage is roughly 10x higher after the migration?
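
Would it also make sense to cap the internal memory queue? This is what we would try, with the values being only a guess on our side:

# Cap the internal memory queue (smaller values trade throughput for RAM)
queue.mem:
  events: 2048
  flush.min_events: 512
  flush.timeout: 5s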

Not sure; I do not use k8s.

What version are you using? Some versions had memory leak issues in certain cases.
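
If you want to see where the memory goes, you can also expose Filebeat's local stats endpoint and look at beat.memstats. A small sketch for filebeat.yml, assuming the default port:

# Expose the local monitoring endpoint inside the pod only
http:
  enabled: true
  host: localhost
  port: 5066
# Then, from inside the pod: curl -s localhost:5066/stats and check beat.memstats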