Filebeat not sending metadata to Graylog even though I included add_kubernetes_metadata

Hi
I have Filebeat set up to pull logs from Kubernetes, but it is not working as expected.
A little background: I have Graylog running with MongoDB and Elasticsearch on a single server, with the Beats plugin "graylog-plugin-beats-2.4.7" installed on Graylog to read Beats logs. My team's APIs run in Kubernetes, and we are trying to ship their logs with Filebeat. I am getting the log messages, but the metadata is missing.
Any help is appreciated.
The following is my config:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        enabled: true
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    output.logstash:
      enabled: true
      hosts: ['mydns:5044']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: docker
      containers:
        path: "/var/lib/docker/containers"
      containers.ids:
      - "*"
      json.keys_under_root: true
      json.add_error_key: false
      json.message_key: log
      json.ignore_decoding_error: true
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      multiline.negate: true
      multiline.match: after

      processors:
        - add_kubernetes_metadata:
            in_cluster: true
        - add_docker_metadata: ~
        - drop_event:
            when:
              or:
                - regexp:
                    kubernetes.pod.name: "external-dns.*"
                - regexp:
                    kubernetes.pod.name: "filebeat.*"

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.6.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: mydns
        - name: ELASTICSEARCH_PORT
          value: "5044"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: prospectors
          mountPath: /usr/share/filebeat/prospectors.d
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlogcontainers
          mountPath: /var/log/containers
          readOnly: true
        - name: varlogpods
          mountPath: /var/log/pods
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers
      - name: varlogpods
        hostPath:
          path: /var/log/pods
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: prospectors
        configMap:
          defaultMode: 0600
          name: filebeat-prospectors
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

Looking at your configuration, I see the processors, and they should add the requested information. Let's go by elimination: can you try the Filebeat file output and see if the data is there?
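Something like the following should do for the test — a minimal sketch, with the path and filename only as examples (any directory writable from inside the container will do; note that only one output may be enabled at a time):

```yaml
# Temporary debugging output: write events to a local file so you can
# inspect exactly what Filebeat produces before it reaches Graylog.
output.logstash:
  enabled: false

output.file:
  enabled: true
  path: "/usr/share/filebeat/data"   # example path; any writable dir works
  filename: filebeat-debug
```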

Thank you for the reply. Meanwhile, I have installed the stable Helm chart in my dev environment and checked for metadata there: I get the log data, but not the metadata.

After your reply, since I had already installed the Helm chart, I changed the output to a file (instead of Logstash), and now I can see the metadata. Below is the output:

{
  "@timestamp": "2019-06-19T20:02:40.200Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.5.4"
  },
  "stream": "stdout",
  "message": "10.110.108.4 - - [19/Jun/2019:20:02:40 +0000] \"GET / HTTP/1.1\" 200 798 \"-\" \"kube-probe/1.12\" \"-\"\n10.110.108.4 - - [19/Jun/2019:20:02:46 +0000] \"GET / HTTP/1.1\" 200 798 \"-\" \"kube-probe/1.12\" \"-\"",
  "offset": 45248241,
  "input": {
    "type": "docker"
  },
  "kubernetes": {
    "pod": {
      "name": "its-1558-web-app-74579bdfd5-jb4z6"
    },
    "node": {
      "name": "aks-nodepool1-xxxxxxxx-0"
    },
    "container": {
      "name": "web-app"
    },
    "namespace": "its-1558",
    "replicaset": {
      "name": "its-1558-web-app-74579bdfd5"
    },
    "labels": {
      "app": {
        "kubernetes": {
          "io/instance": "its-1558",
          "io/name": "web-app"
        }
      },
      "pod-template-hash": "74579bdfd5"
    }
  },
  "beat": {
    "name": "dev-filebeat-helm-ctgkq",
    "hostname": "dev-filebeat-helm-ctgkq",
    "version": "6.5.4"
  },
  "meta": {
    "cloud": {
      "instance_id": "xxxxxxxxxx",
      "instance_name": "aks-nodepool1-xxxxxxx-0",
      "machine_type": "Standard_D8_v3",
      "region": "eastus2",
      "provider": "az"
    }
  },
  "log": {
    "flags": [
      "multiline"
    ]
  },
  "source": "/var/lib/docker/containers/2c1f93889054bc7c755f2fcae2c517fe391b0375150ec80640d731c70b5636a3/2c1f93889054bc7c755f2fcae2c517fe391b0375150ec80640d731c70b5636a3-json.log",
  "prospector": {
    "type": "docker"
  },
  "host": {
    "name": "dev-filebeat-helm-ctgkq"
  }
}

After installing the chart from https://github.com/helm/charts/tree/master/stable/filebeat, I changed only values.yaml, as below:

image:
  repository: docker.elastic.co/beats/filebeat-oss
  tag: 6.5.4
  pullPolicy: IfNotPresent

config:
  filebeat.config:
    prospectors:
      # Mounted `filebeat-prospectors` configmap:
      path: ${path.config}/prospectors.d/*.yml
      # Reload prospectors configs as they change:
      reload.enabled: false
    modules:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

  processors:
    - add_cloud_metadata: ~

  filebeat.prospectors:
    # - type: log
    #   enabled: true
    #   paths:
    #     - /var/lib/docker/containers

    - type: docker
      enabled: true
      combine_partial: true
      containers.ids:
      - "*"
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      multiline.negate: true
      multiline.match: after
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
        - drop_event:
            when:
              # 'equals' does exact matching and repeated YAML keys collapse,
              # so wildcard container names need a regexp condition instead:
              regexp:
                kubernetes.container.name: "^(dev-filebeat|ct-external-dns|azure-cni-networkmonitor|tiller|tunnelfront|kube-svc|pds-keyvault)"

  output.file:
    enabled: true
    path: "/usr/share/filebeat/data"
    filename: filebeat
    rotate_every_kb: 10000
    number_of_files: 5

  
  output.logstash:
    enabled: false
    hosts: ["mydns:5044"]  

  # When a key contains a period, use this format for setting values on the command line:
  # --set config."http\.enabled"=true
  http.enabled: false
  http.port: 5066

# Upload index template to Elasticsearch if Logstash output is enabled
# https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-template.html
# List of Elasticsearch hosts
indexTemplateLoad: []
  # - elasticsearch:9200

# List of beat plugins
plugins: []
  # - kinesis.so

# pass custom command. This is equivalent of Entrypoint in docker
command: []

# pass custom args. This is equivalent of Cmd in docker
args: []

# A list of additional environment variables
extraVars: []
  # - name: TEST1
  #   value: TEST2
  # - name: TEST3
  #   valueFrom:
  #     configMapKeyRef:
  #       name: configmap
  #       key: config.key

# Add additional volumes and mounts, for example to read other log files on the host
extraVolumes: []
  # - hostPath:
  #     path: /var/log
  #   name: varlog
extraVolumeMounts: []
  # - name: varlog
  #   mountPath: /host/var/log
  #   readOnly: true

extraInitContainers: []
  # - name: echo
  #   image: busybox
  #   imagePullPolicy: Always
  #   args:
  #     - echo
  #     - hello

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 200Mi
  # requests:
  #  cpu: 100m
  #  memory: 100Mi

priorityClassName: ""

nodeSelector: {}

annotations: {}

tolerations: []
  # - operator: Exists

affinity: {}

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  create: true
  name: filebeat

podSecurityPolicy:
  enabled: False
  annotations: {}
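For reference, one variant I also considered (but have not tested) is pointing `add_kubernetes_metadata` at the docker log path explicitly, since, if I read the Filebeat 6.x docs correctly, the default `logs_path` matcher expects logs under /var/log/containers rather than /var/lib/docker/containers:

```yaml
# Untested variant: make the metadata matcher resolve container IDs
# from the docker log path instead of the default /var/log/containers.
processors:
  - add_kubernetes_metadata:
      in_cluster: true
      default_matchers.enabled: false
      matchers:
        - logs_path:
            logs_path: /var/lib/docker/containers/
```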

I have noticed that Filebeat was not sending the application logs, but it is sending some networking and monitoring logs to Graylog.

My application logs are below; they come from one of our API containers.

2019-06-19 13:35:15.948 [http-nio-8080-exec-1] [] [] INFO  o.a.c.c.C.[.[.[/] - Initializing Spring DispatcherServlet 'dispatcherServlet'
2019-06-19 13:35:15.948 [http-nio-8080-exec-1] [] [] INFO  o.s.w.s.DispatcherServlet - Initializing Servlet 'dispatcherServlet'
2019-06-19 13:35:15.961 [http-nio-8080-exec-1] [] [] INFO  o.s.w.s.DispatcherServlet - Completed initialization in 12 ms
2019-06-19 14:13:35.227 [cluster-ClusterId{value='5d0a3a086dd9c60001dac7d0', description='null'}-mongo:27017] [] [] INFO  o.m.d.cluster - Exception in monitor thread while connecting to server mongo:27017
com.mongodb.MongoSocketException: mongo: Name or service not known
	at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:188)
	at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64)
	at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:62)
	at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:126)
	at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:131)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.net.UnknownHostException: mongo: Name or service not known
	at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
	at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
	at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1515)
	at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
	at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
	at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
	at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
	at java.base/java.net.InetAddress.getByName(InetAddress.java:1248)
	at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:186)
	... 5 common frames omitted
2019-06-19 14:14:05.245 [cluster-ClusterId{value='5d0a3a086dd9c60001dac7d0', description='null'}-mongo:27017] [] [] INFO  o.m.d.connection - Opened connection [connectionId{localValue:6, serverValue:1}] to mongo:27017

@pierhugues I have also captured the file output for the prospector configuration (which I posted initially) and noticed that it, too, has the metadata, as below. Please suggest further steps. Thank you for helping to resolve this issue.

One more thing I noticed: Filebeat was not reading all of the container logs. That is why I did not find the application logs.

{
  "@timestamp": "2019-06-23T21:52:36.366Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.6.1"
  },
  "stream": "stdout",
  "message": "10.110.108.35 - - [23/Jun/2019:21:52:36 +0000] \"GET / HTTP/1.1\" 200 798 \"-\" \"kube-probe/1.12\" \"-\"\n10.110.108.35 - - [23/Jun/2019:21:52:40 +0000] \"GET / HTTP/1.1\" 200 798 \"-\" \"kube-probe/1.12\" \"-\"",
  "input": {
    "type": "docker"
  },
  "prospector": {
    "type": "docker"
  },
  "kubernetes": {
    "node": {
      "name": "aks-nodepool1-xxxxxxxx-2"
    },
    "container": {
      "name": "web-app"
    },
    "namespace": "pdswebapp-1051",
    "replicaset": {
      "name": "pdswebapp-1051-web-app-58fbd89f9"
    },
    "labels": {
      "app": {
        "kubernetes": {
          "io/instance": "pdswebapp-1051",
          "io/name": "web-app"
        }
      },
      "pod-template-hash": "58fbd89f9"
    },
    "pod": {
      "uid": "xxxxxxxxxxxxxxxxxxx",
      "name": "pdswebapp-1051-web-app-58fbd89f9-lsvxx"
    }
  },
  "source": "/var/lib/docker/containers/8c57a6529d2c0ed2b4ed97c9a23a606ea331084c236c171426e04ef8dfb5bf35/8c57a6529d2c0ed2b4ed97c9a23a606ea331084c236c171426e04ef8dfb5bf35-json.log",
  "offset": 39475559,
  "log": {
    "file": {
      "path": "/var/lib/docker/containers/8c57a6529d2c0ed2b4ed97c9a23a606ea331084c236c171426e04ef8dfb5bf35/8c57a6529d2c0ed2b4ed97c9a23a606ea331084c236c171426e04ef8dfb5bf35-json.log"
    },
    "flags": [
      "multiline"
    ]
  },
  "beat": {
    "name": "filebeat-4r9ct",
    "hostname": "filebeat-4r9ct",
    "version": "6.6.1"
  },
  "docker": {
    "container": {
      "id": "8c57a6529d2c0ed2b4ed97c9a23a606ea331084c236c171426e04ef8dfb5bf35",
      "labels": {
        "annotation": {
          "io": {
            "kubernetes": {
              "container": {
                "terminationMessagePath": "/dev/termination-log",
                "ports": "[{\"name\":\"http\",\"containerPort\":80,\"protocol\":\"TCP\"}]",
                "terminationMessagePolicy": "File",
                "hash": "d3593af9",
                "restartCount": "0"
              },
              "pod": {
                "terminationGracePeriod": "30"
              }
            }
          }
        },
        "io": {
          "kubernetes": {
            "pod": {
              "name": "pdswebapp-1051-web-app-58fbd89f9-lsvxx",
              "namespace": "pdswebapp-1051",
              "uid": "aca67699-8bcc-11e9-a11d-9e55dd348746"
            },
            "container": {
              "name": "web-app",
              "logpath": "/var/log/pods/aca67699-8bcc-11e9-a11d-9e55dd348746/web-app/0.log"
            },
            "sandbox": {
              "id": "xxxxxxxxxxxxxxxxxxxxxxxxx"
            },
            "docker": {
              "type": "container"
            }
          }
        },
        "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"
      },
      "image": "quay.io/controltec/devspace-webapp@sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "name": "k8s_web-app_pdswebapp-1051-web-app-58fbd89f9-lsvxx_pdswebapp-1051_aca67699-8bcc-11e9-a11d-9e55dd348746_0"
    }
  },
  "host": {
    "name": "filebeat-4r9ct"
  },
  "json": {}
}
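Once the file output confirmed the metadata, switching back to shipping to Graylog is just flipping the two output blocks in values.yaml (same host and port as in my original config):

```yaml
# Re-enable the Logstash/Beats output after debugging with the file output.
output.file:
  enabled: false

output.logstash:
  enabled: true
  hosts: ["mydns:5044"]
```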

I reached out to the Graylog community, since the metadata was present in the Filebeat output. They suggested upgrading to Graylog 3.0 and using the new Beats input to get this data.