How to regulate Filebeat memory usage

I have Filebeat deployed in Kubernetes with the following configuration:

filebeat.yml
    filebeat.config:
      inputs:
        enabled: true
        path: inputs.d/*.yml
        reload.enabled: true
        reload.period: 10s
      modules:
        enabled: true
        path: modules.d/*.yml
        reload.enabled: true
        reload.period: 10s
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
    processors:
      - add_cloud_metadata: ~
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.elasticsearch:
      enabled: true
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      protocol: "http"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      max_retries: 3
      bulk_max_size: 50
      backoff.init: 1s
      backoff.max: 60s
      timeout: 90
    setup.ilm.enabled: auto
    setup.ilm.rollover_alias: 'filebeat-%{[agent.version]}'
    setup.ilm.pattern: "{now/d}-000001"
    setup.ilm.policy_name: "filebeat-rollover-7-days"
    setup.ilm.check_exists: true
    setup.ilm.overwrite: true
    monitoring.enabled: true
    logging.level: warning
    logging.metrics.enabled: true
    logging.metrics.period: 30s
    logging.to_files: false

and the deployment manifest:

DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-logging
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  minReadySeconds: 12
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.12.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 200m
            memory: 1800Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: policy
          mountPath: /usr/share/filebeat/policy
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
      - name: policy
        configMap:
          defaultMode: 0600
          name: filebeat-policy
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

I noticed that these pods keep getting restarted:

NAME             READY   STATUS    RESTARTS   AGE
filebeat-6885t   1/1     Running   5          16h
filebeat-bt6dk   1/1     Running   2          16h
filebeat-jqnbp   1/1     Running   4          16h
filebeat-s5pxh   1/1     Running   3          16h
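
To confirm that the restarts are OOM kills rather than crashes, you can inspect the last terminated state of one of the pods (the pod name below is taken from the listing above):

    kubectl -n kube-logging get pod filebeat-bt6dk \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

If this prints OOMKilled, the container was killed for exceeding its memory limit rather than exiting on its own.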

Here is the monitoring graph for filebeat-bt6dk:

Instance filebeat-bt6dk was killed twice, each time after its memory utilization reached 2.3 GB.
This is the limit I have set in the manifest, which I think is generous:

        resources:
          limits:
            cpu: 200m
            memory: 1800Mi
          requests:
            cpu: 100m
            memory: 100Mi

How can I prevent Kubernetes from killing the Filebeat pods for excessive memory usage?

I think this article might be of interest to you: Configure the internal queue | Filebeat Reference [master] | Elastic
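
Your filebeat.yml does not set any queue options, so the defaults apply (for example, queue.mem.events defaults to 4096). As a sketch, you could bound the internal memory queue with something like this in filebeat.yml (the numbers are illustrative starting points, not tuned recommendations):

    queue.mem:
      events: 2048           # cap on events buffered in memory (default 4096)
      flush.min_events: 512  # publish once this many events are buffered...
      flush.timeout: 5s      # ...or after this long, whichever comes first

Together with output.elasticsearch.bulk_max_size, this caps how many events Filebeat holds in memory at once. The queue is not the whole memory footprint, but an unbounded event backlog (for example, when Elasticsearch is slow or unreachable) is a common cause of memory growth.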