Filebeat Getting Logs from a Container in K8s

I have an Express server running in K8s which writes its logs to a file:
'/var/lib/docker/containers/analytics-error.log'

I cannot get Filebeat to pick up this file. Here is my config:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
      setup.template.settings:
        index.number_of_shards: 1

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    # filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    filebeat.inputs:
      - type: log
        paths:
          - /var/lib/docker/containers
        fields_under_root: true
        fields:
          log_type: etp_log

    logging.level: debug
    logging.selectors: ["prospector","harvester"]

    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~

    output.logstash:
      enabled: true
      ssl.enabled: true

      hosts: ["logstash.chargepayments.io:8751"]
      ssl.certificate_authorities: ["${path.config}/keys/ca.crt"]
      ssl.certificate: "${path.config}/keys/client.crt"
      ssl.key: "${path.config}/keys/client.key"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      path: "/var/lib/docker/containers"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.0.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: cacert
          mountPath: /usr/share/filebeat/keys/ca.crt
          subPath: ca.crt
        - name: clientcert
          mountPath: /usr/share/filebeat/keys/client.crt
          subPath: client.crt
        - name: clientkey
          mountPath: /usr/share/filebeat/keys/client.key
          subPath: client.key

      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
      - name: cacert
        secret:
          secretName: certs
          items:
          - key: cacert
            path: ca.crt
      - name: clientcert
        secret:
          secretName: certs
          items:
          - key: clientcert
            path: client.crt
      - name: clientkey
        secret:
          secretName: certs
          items:
          - key: clientkey
            path: client.key

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: ["*"] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

Hey @NFhbar
I'm not too familiar with the setup you mentioned. Is var/log/container/.. accessible to Filebeat?
Have you tried the autodiscover feature of Filebeat? It reads /var/log/container for running containers, and based on the module used you could probably achieve the same.
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html#_kubernetes
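
Something along these lines could be a starting point (untested on my side; the /var/log/containers path and the container input type are assumptions based on the standard kubelet log layout and Filebeat 7.x, and this would replace the filebeat.config.inputs / filebeat.inputs section in filebeat.yml):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        # the 'container' input tails the per-container log files the kubelet links here
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log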

@Michal_Pristas
It seems that var/log/container/.. is accessible to Filebeat, since I am receiving logs from the containers. I am also writing my own logs from my Express server to that location, but Filebeat is not able to read those files.
The way I solved this is that I am now logging directly to the console from my Express server, and Filebeat is able to pick those up, but I am still not sure why the log file is not being picked up.
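
For what it's worth, my log input points at the bare directory rather than a file glob, so something like the following is probably what I would have needed (a guess on my part, not verified; the *.log pattern is an assumption):

filebeat.inputs:
  - type: log
    paths:
      # a file pattern, not a bare directory
      - /var/lib/docker/containers/*.log
    fields_under_root: true
    fields:
      log_type: etp_log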

I'll look into autodiscover, thanks!
