Filebeat is not sending logs from all pods of the same application

Hi,
We are running Filebeat as a DaemonSet in a Kubernetes environment, sending logs directly to Elasticsearch.
Let's say we have an application (app1) running as Kubernetes pods. When a single app1 pod runs on a worker node (worker1), Filebeat ships all of its logs to Elasticsearch, but when we schedule another app1 pod on the same worker1 node, the new pod's logs never reach Elasticsearch. We have been stuck on this for the last few days and cannot find the right solution. Any guidance would be very helpful. Thank you.
Below is our tech stack:
Filebeat: 8.16.1
Elasticsearch: 8.16
Kibana: 8.16
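
For context, the config below references ${NODE_NAME}, ${ELASTICSEARCH_HOST} and ${ELASTICSEARCH_PORT}. These are injected through the DaemonSet pod spec, roughly like this (a simplified sketch, not our full manifest; the real one also mounts /var/log and the registry volume):

   env:
     # node name via the downward API, so autodiscover only watches
     # pods scheduled on this node
     - name: NODE_NAME
       valueFrom:
         fieldRef:
           fieldPath: spec.nodeName
     - name: ELASTICSEARCH_HOST
       value: elasticsearch
     - name: ELASTICSEARCH_PORT
       value: "9200"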

Filebeat ConfigMap (filebeat.yml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: monitoring
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
   filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints.enabled: true
            templates:
              - condition:
                  has_fields: ["kubernetes.namespace", "kubernetes.pod.name", "kubernetes.container.name", "kubernetes.labels.app"]
                config:
                  - type: log
                    enabled: true
                    paths:
                      - /var/log/containers/*.log
                    harvester_limit: 0
                    symlinks: true
                    fields:
                      log_source: "containers"
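                    # group continuation lines (stack traces, wrapped output) with the preceding log record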
                    multiline.pattern: '^[^{]|(?i)^\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\]|\d{4}-\d{2}-\d{2}|^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+[+-]\d{2}:\d{2}|\b(ERROR|WARN|INFO|DEBUG|TRACE)\b' 
                      #multiline.pattern: '^[[:digit:]]'
                    multiline.negate: true
                    multiline.match: after
                    multiline.max_lines: 1500
                    scan_frequency: 5s
                    close_renamed: true
                    ignore_older: 0
                    close_inactive: 3h
                    clean_inactive: 7h
                    exclude_files: ['.gz$']
   processors:
     - add_kubernetes_metadata:
        in_cluster: true
     - drop_fields:
         fields: ["host.name", "agent.ephemeral_id", "agent", "ecs", "input.type"]
     - drop_event:
         when:
           or:
             - equals:
                 kubernetes.namespace: "monitoring"
             - equals:
                 kubernetes.namespace: "kube-system"
             - equals:
                 kubernetes.namespace: "kube-logging"
             - equals:
                 kubernetes.namespace: "metallb-system"
             - equals:
                 kubernetes.namespace: "kube-public"
   setup.template.name: "rhos-filebeat-template"
   setup.template.pattern: "rhos-filebeat-*"
   setup.template.enabled: true

   queue.mem:
     events: 12373
     #flush.min_events: 1024
     #flush.timeout: 1s

   filebeat.registry:
     path: /usr/share/filebeat/data/registry 
     flush: 1s

   output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      loadbalance: true
      #username: ${ELASTICSEARCH_USERNAME}
      #password: ${ELASTICSEARCH_PASSWORD}
      index: "rhos-filebeat-%{[kubernetes.namespace]}-%{[kubernetes.labels.app]}-%{+yyyy.MM.dd}"
      bulk_max_size: 4096
      worker: 8
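
For reference, this is roughly how the second app1 pod ends up on the same node when we reproduce the issue (a simplified sketch; the deployment name, image, and node label here are placeholders, not our exact manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 2            # the second replica is the one whose logs never arrive
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1        # this label feeds kubernetes.labels.app in the index name above
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker1   # pin both replicas to worker1 to reproduce
      containers:
        - name: app1
          image: app1:latest              # placeholder image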