add_kubernetes_metadata does not provide Kubernetes namespace or pod UID

Hi there!
I upgraded ELK from 6.6.2 to 7.2.0.
Before the upgrade everything worked fine; after the upgrade I no longer receive Kubernetes logs.

I configured and updated everything according to the documentation.

These are the YAML files. The DaemonSet:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: filebeat
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - args:
        - -c
        - /etc/filebeat.yml
        - -e
        env:
        - name: LOGSTASH_HOSTS
          value: logstash-kube:5044
        image: docker.elastic.co/beats/filebeat:7.2.0
        imagePullPolicy: IfNotPresent
        name: filebeat
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          procMount: Default
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/filebeat.yml
          name: config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /usr/share/filebeat/inputs.d
          name: inputs
          readOnly: true
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /var/log
          name: varlog
          readOnly: true
        - mountPath: /var/data
          name: vardata
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: filebeat
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 384
          name: filebeat-config
        name: config
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/data
          type: ""
        name: vardata
      - hostPath:
          path: /var/log/pods
          type: ""
        name: varlibdockercontainers
      - configMap:
          defaultMode: 384
          name: filebeat-inputs
        name: inputs
      - emptyDir: {}
        name: data
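
To highlight the part I suspect is relevant (this is just restating the DaemonSet above): the path Filebeat sees as /var/lib/docker/containers is actually /var/log/pods on the host, because of this volumeMount/volume pairing:

# volumeMount inside the Filebeat container:
- mountPath: /var/lib/docker/containers
  name: varlibdockercontainers
  readOnly: true

# the volume it is backed by on the host:
- hostPath:
    path: /var/log/pods
    type: ""
  name: varlibdockercontainers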

The Filebeat config file (the filebeat-config ConfigMap):

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload input configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    output.logstash:
      hosts: ${LOGSTASH_HOSTS:?No logstash host configured. Use env var LOGSTASH_HOSTS to set hosts.}
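
For completeness, the LOGSTASH_HOSTS environment variable comes from the DaemonSet above (logstash-kube:5044), so my understanding is that at runtime the output section resolves to the equivalent of:

output.logstash:
  hosts: ["logstash-kube:5044"]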

The input config file (the filebeat-inputs ConfigMap):

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: container
      paths:
        - '/var/lib/docker/containers/*/*/*.log'
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
      scan_frequency: 10s
      close_inactive: 1m
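
Note that I don't set any indexers or matchers explicitly, so add_kubernetes_metadata should be falling back to its defaults. For comparison, here is roughly what I understand an explicit configuration would look like. This is only a sketch, I don't actually run it, and the logs_path value below is just my assumption based on the mountPath in the DaemonSet:

- type: container
  paths:
    - '/var/lib/docker/containers/*/*/*.log'
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        # hypothetical explicit matcher; mirrors the container-side mount path
        matchers:
          - logs_path:
              logs_path: '/var/lib/docker/containers/'
  scan_frequency: 10s
  close_inactive: 1m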

As a result, my Logstash pipeline does not produce any logs now (here is the config):

input {
    beats {
        port => 5044
    }
}
filter {
    if [kubernetes][container][name] == "nginx-ingress" {
        json {
            source => "message"
            remove_field => "message"
        }
    }
    else if [kubernetes][container][name] == "nginx" {
        grok {
            match => {
                "message" => "%{IP:remote_ip} - \[%{HTTPDATE:[response][time]}\] \"%{DATA:url}\" %{NUMBER:[response][code]} %{NUMBER:[response][bytes]} %{QS:user_agent}"
            }
            remove_field => "message"
        }
        geoip {
            source => "remote_ip"
            target => "[geoip]"
        }
    }
    else {
        drop {}
    }

    date {
        match => ["time", "ISO8601"]
        remove_field => ["time"]
    }

    mutate {
        remove_field => ["source", "host", "[beat][name]", "[beat][version]"]
    }
}

output {
    elasticsearch {
        hosts => ["http://Kube-Apps-XXXXX-dal10.lb.bluemix.net:9200"]
        index => "apps-prod-dal13-%{[kubernetes][namespace]}-deployment-%{[kubernetes][container][name]}-%{[kubernetes][replicaset][name]}%{+YYYY.MM.dd}"
    }
}
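
For context, both the filter conditionals and the Elasticsearch index name above depend on the kubernetes.* fields that add_kubernetes_metadata is supposed to attach. Based on my understanding of the 7.x field names, I would expect each event to carry something roughly like this (all values below are placeholders):

kubernetes:
  namespace: some-namespace
  container:
    name: nginx
  pod:
    name: nginx-xxxxxxxxxx-yyyyy
    uid: 00000000-0000-0000-0000-000000000000
  replicaset:
    name: nginx-xxxxxxxxxx

Since these fields are missing, every event falls into the else { drop {} } branch, which matches what I see: nothing reaches Elasticsearch.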

I can see in the logs that Filebeat produces that add_kubernetes_metadata does not add the Kubernetes namespace, pod UID, etc.
Why?
What am I doing wrong here with the new version?

Thanks,
Aleksei

Can anyone help please?
