[solved] Kubernetes – autodiscover and add_kubernetes_metadata only working partially

Hi!
Basically I'm trying to ship all logs from my k8s cluster, while adding kubernetes metadata to all containers and joining some multiline output on some of them.

I only get the kubernetes: {} metadata on some log lines; most do not have it.

Here's my deployment yaml, including the filebeat-config:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: docker
        symlinks: true
        containers:
          path: "/var/lib/docker/containers"
          ids:
            - '*'
        fields:
          type: kubernetes_log
          cluster: dev
        processors:
          - add_kubernetes_metadata:
              in_cluster: true
              namespace: ${POD_NAMESPACE}

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.container.name: "some-container"
                  - equals:
                      kubernetes.container.name: "some-other-container"
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline.pattern: '^2'
                  multiline.negate: true
                  multiline.match: after
                  multiline.max_lines: 500
                  multiline.timeout: 5s

    processors:
      - add_cloud_metadata:

    output.kafka:
      hosts: 
        - '1.2.3.4:9092'
        - '1.2.3.5:9092'
        - '1.2.3.6:9092'
      topic: 'dev-logging'
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.5.3
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlogpods
          mountPath: /var/log/pods
          readOnly: true
        - name: varlogcontainers
          mountPath: /var/log/containers
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config   
      - name: varlogpods
        hostPath:
          path: /var/log/pods
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1    
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
---

Any hints are greatly appreciated :slight_smile:

Hi @c_g,

Thank you for reporting, I see a few issues in this config:

You are mixing a static docker input with autodiscover, which creates a collision between them. I would suggest switching to autodiscover only; that should ensure metadata is present in all cases. You can add a template at the end, with no condition, to act as the default when the previous configs don't match.
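To illustrate the idea, an autodiscover-only layout could look roughly like this sketch (container names are placeholders; the last template has no condition and acts as the catch-all — exact behavior of an unconditioned template may vary between Filebeat versions):

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        # conditional template: multiline handling for a selected container
        - condition:
            equals:
              kubernetes.container.name: "some-container"
          config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"
              multiline.pattern: '^2'
              multiline.negate: true
              multiline.match: after
        # default template (no condition): used when nothing above matched
        - config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"
```

Since every event then originates from autodiscover, the kubernetes metadata is attached consistently instead of only for the containers matched by a template.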

Best regards


Hi,

thanks so much for the tip! I thought the input was necessary and autodiscover only supplemented it.

So I removed the input and tried to add a second template, but when I skip the condition field or leave it empty, filebeat won't start, stating it needs a condition.

It does work when I negate the condition from the first block, and it seems I get my k8s metadata now, but is there a more elegant way than simply copying the condition and prepending "not:"?
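For reference, the negation workaround looks roughly like this (a sketch with the same placeholder container names as above, wrapping the original condition in "not:"):

```yaml
# second template: the first template's condition, negated with "not:"
- condition:
    not:
      or:
        - equals:
            kubernetes.container.name: "some-container"
        - equals:
            kubernetes.container.name: "some-other-container"
  config:
    - type: docker
      containers.ids:
        - "${data.kubernetes.container.id}"
```

It works, but the two templates have to be kept in sync by hand, which is what I'd like to avoid.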

Best regards, Chris

Updating to filebeat 7.1 solved the "empty condition" error.

So, here's my working config, for reference:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.container.name: "some-container"
                  - equals:
                      kubernetes.container.name: "some-other-container"
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline.pattern: '^2'
                  multiline.negate: true
                  multiline.match: after
                  multiline.max_lines: 500
                  multiline.timeout: 5s
                  fields:
                    type: kubernetes_log
                    cluster: dev
                    multiline: "true"
            - condition:
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  fields:
                    type: kubernetes_log
                    cluster: dev
                    multiline: "false"

    processors:
      - add_cloud_metadata:
      - add_kubernetes_metadata:
          in_cluster: true
          namespace: ${POD_NAMESPACE}

    output.kafka:
      hosts: 
        - '1.2.3.4:9092'
        - '1.2.3.5:9092'
        - '1.2.3.6:9092'
      topic: 'dev-logging'
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.1.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlogpods
          mountPath: /var/log/pods
          readOnly: true
        - name: varlogcontainers
          mountPath: /var/log/containers
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlogpods
        hostPath:
          path: /var/log/pods
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system

Logs multiline on select services, adds k8s metadata to all. :slight_smile:


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.