Filebeat unable to obtain Kubernetes metadata

I am running Filebeat in my Kubernetes cluster. I used this Helm chart to deploy Filebeat, Logstash, and Elasticsearch. This is my Filebeat configuration:

filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      exclude_files:
        - '.*elk.*'
        - '.*elasticsearch.*'
      paths:
        - /var/log/containers/*.log
      processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
    output.logstash:
      worker: 3
      host: '${NODE_NAME}'
      hosts: 'elk-logstash-logstash:5000'
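For context, the add_kubernetes_metadata processor needs ${NODE_NAME} to resolve to the node the Filebeat pod is running on; in a typical Filebeat DaemonSet this is injected via the Kubernetes downward API. A minimal sketch of that env entry (assuming a standard DaemonSet spec — verify against what the chart actually renders):

```yaml
# Sketch: env entry on the filebeat DaemonSet container so that
# ${NODE_NAME} in filebeat.yml resolves via the downward API.
env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```

If NODE_NAME is unset or wrong, the processor cannot match log files to pods on that node, which produces exactly the "missing kubernetes section" symptom.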

I don't want to process logs from the elasticsearch and elk deployments, so I am excluding them. In Logstash I am writing data into an index named after [kubernetes][pod][name]. If the pod name is missing, I write the event to stdout instead. Unfortunately, logs from some containers are missing this metadata. The container is alive, and its container ID matches the one in the log file name. For example, I have Prometheus logs, but none of the messages from Filebeat contain a pod name. I can see that these logs are missing the whole kubernetes section. This is my Logstash configuration:

logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5000
      }
    }
    filter {
    }
    output {
      if ![kubernetes][pod][name] {
        stdout { codec => rubydebug { metadata => true }}
      } else {
        elasticsearch {
          hosts => ["http://elasticsearch-data:9200"]
          index => "%{[kubernetes][pod][name]}"
        }
      }
    }
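If the goal is just to keep events with missing metadata out of the stdout sink and still make them searchable, one option (a sketch, not tested against this setup; "unknown-pod" is a placeholder name of my choosing) is to fill in a fallback pod name in the filter block instead of branching in the output:

```ruby
filter {
  # Fallback: events that lost their kubernetes metadata get a
  # placeholder pod name so they land in a searchable index.
  if ![kubernetes][pod][name] {
    mutate {
      add_field => { "[kubernetes][pod][name]" => "unknown-pod" }
    }
  }
}
```

That way the elasticsearch output can be used unconditionally, and the affected events end up grouped in one index where they are easier to inspect.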

I am using Helm 3.

UPDATE 1:
Interestingly, for some containers, e.g. the grafana container, some logs contain Kubernetes metadata, but some do not...

Hey @dorinand,

What version of Filebeat are you using? Do you see any errors in the logs related to add_kubernetes_metadata?

I wonder if there may be multiple inputs trying to read the same logs, with different configurations or from different paths. Do you see duplicated logs? Do all the events stored for the same container have the same log.file.path?
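For the events that do reach Elasticsearch, one way to spot a second source is a terms aggregation on log.file.path restricted to one pod, e.g. the grafana pod mentioned above. A sketch of the query DSL (field names assume the default Filebeat mapping; adjust the pod name and index pattern to your data):

```json
{
  "size": 0,
  "query": {
    "match": { "kubernetes.pod.name": "grafana" }
  },
  "aggs": {
    "paths": {
      "terms": { "field": "log.file.path", "size": 50 }
    }
  }
}
```

More than one bucket for a single container would suggest the same logs are being picked up from multiple paths.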

By the way, have you considered sending the logs directly to Elasticsearch and using the default indices? Is there any reason why you are using Logstash, and one index per pod?
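For comparison, the direct route would replace the output.logstash section with something like the following (a sketch; the host reuses the in-cluster Elasticsearch service already referenced in the Logstash pipeline above):

```yaml
# Sketch: Filebeat writing straight to Elasticsearch with the
# default filebeat-* indices, bypassing Logstash entirely.
output.elasticsearch:
  hosts: ["http://elasticsearch-data:9200"]
```

With the default indices you can still filter by kubernetes.pod.name at query time, without creating one index per pod.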