Filebeat whitelist/blacklist for container logs in Kubernetes

Hi, I have a Kubernetes cluster with 3 nodes, and I installed Filebeat 7.6 to ship logs from the cluster's containers using the configuration from this link.
I want to exclude some containers or pods from logging and stop Filebeat from collecting their logs. The problem with the obvious answer to this situation, the exclude_files option, is that it only works when we know the exact path of the log files, whereas in Kubernetes Filebeat finds each container's log file through a wildcard path like "/var/log/containers/*.log".
The problem is that the container ID of each microservice keeps changing and is a hash, which makes exclusion/inclusion by path impossible!
What is the real solution to this problem?
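For reference, this is roughly the input configuration I tried; a minimal sketch, with `noisy-pod` standing in for a real pod name:

```
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    # exclude_files takes regexes matched against the file path, but the
    # file names end in an unpredictable container-ID hash, e.g.
    # noisy-pod-xxxxx_default_app-<64-hex-chars>.log
    exclude_files:
      - 'noisy-pod-.*\.log'
```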
Thanks for your attention.
Best Regards

Hi @Vahid_Mouasvi!

How about leveraging Autodiscover and defining conditions using labels, for instance?
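For example, something along these lines; a minimal sketch that assumes your pods carry an `app: my-app` label (a placeholder):

```
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.labels.app: my-app
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
```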

Hi @ChrsMark,
I'm afraid Autodiscover won't work properly, and it also adds extra computational load for processing Kubernetes pod labels! It amounts to dropping events after checking some parameters instead of ignoring the whole log file in the first place! Do you have an example config with Autodiscover?
Regards!

Autodiscover will not start collecting logs from the log files of containers that do not match the conditions. The link I posted above with the documentation includes examples of how Autodiscover is configured.
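If the goal is to exclude specific containers rather than include them, the condition can also be inverted; a sketch, with `some-container` as a placeholder name:

```
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            not:
              equals:
                kubernetes.container.name: some-container
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
```

Alternatively, hints-based autodiscover (`hints.enabled: true` on the provider) collects from all pods by default and lets you opt individual pods out with the `co.elastic.logs/enabled: "false"` annotation.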

C.

I configured Autodiscover as below, but it didn't change anything! Filebeat is still collecting logs from all containers:
```
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.container.name: weave-npc
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
```

I think Filebeat goes to the mounted path /var/lib/docker/containers and collects all logs from there, and neither the Filebeat input nor Autodiscover can change that to the path /var/log/containers.
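As far as I understand, the files under /var/log/containers are symlinks that resolve through /var/log/pods into /var/lib/docker/containers, which is why the manifest mounts both host paths; roughly like this, with placeholder names:

```
/var/log/containers/<pod>_<namespace>_<container>-<container-id>.log
  -> /var/log/pods/<namespace>_<pod>_<pod-uid>/<container>/0.log
  -> /var/lib/docker/containers/<container-id>/<container-id>-json.log
```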
I'm stuck at this step! Please help me out.

Hi!

Please provide the complete configuration file so we can see the big picture (well formatted please :), use triple ` to surround multiline code pieces).

Here is my current configuration:

```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    # filebeat.inputs:
    # - type: container
    #   paths:
    #     - /var/log/containers/*.log
    #   exclude_files:
    #     - filebeat-.*\.log
    #     - kube-state-metrics-.*\.log
    #   multiline.pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^'
    #   multiline.negate: false
    #   multiline.match: after
    #   fields_under_root: true
    #   scan_frequency: 15s
    #   json.message_key: log
    #   json.keys_under_root: true
    #   symlinks: true
    #   processors:
    #     - add_kubernetes_metadata:
    #         host: ${NODE_NAME}
    #         matchers:
    #         - logs_path:
    #             logs_path: "/var/log/containers/"
    #     - drop_event:
    #         when:
    #           equals:
    #             kubernetes.container.name: "kube-state-metrics"

    # Autodiscover configuration (this replaces the `filebeat.inputs` section above):
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                equals:
                  kubernetes.container.name: greendb
              config:
                - type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log

    setup.kibana.host: "kibana:3245"
    setup.kibana.protocol: "http"
    setup.dashboards.enabled: true
    setup.template.enabled: true
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.6.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch.logging
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 150Mi
          requests:
            cpu: 50m
            memory: 150Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          #mountPath: /var/log/containers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          #path: /var/log/containers
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
```

Your configuration looks OK. Could you set the logging level to debug and check the logs?
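A minimal sketch of what to add to filebeat.yml; the selectors line is optional, and the selector names here are assumptions about which subsystems are relevant:

```
logging.level: debug
# optionally narrow the debug output:
#logging.selectors: ["autodiscover", "kubernetes"]
```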
