Getting strange errors after enabling Kubernetes metadata in Filebeat

I am getting the errors below when running Filebeat:

{"log.level":"error","@timestamp":"2023-06-06T08:37:11.110Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/log/continers/*.log/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T08:37:11.110Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T08:37:11.110Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/log/continers/*.log/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T08:37:11.110Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T08:37:11.110Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/log/continers/*.log/'.","service.name":"filebeat","ecs.version":"1.6.0"}

Please find my filebeat.yml below:

filebeat.inputs:
- type: container
  paths: 
    - '/var/log/containers/*.log'
processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: /var/log/continers/*.log
output.elasticsearch:
  hosts: ["http://<host>:9200"]

Please find the deployment file below:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elk
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      #terminationGracePeriodSeconds: 30
      #hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeatcs
        image: docker.elastic.co/beats/filebeat:8.5.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: xxxxx
        - name: ELASTICSEARCH_PORT
          value: "xxx"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: azure
          mountPath: /var
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      - azureFile:
          readOnly: false
          secretName: <secret-name>
          shareName: <share-name>
        name: azure
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

I would appreciate it if anyone could tell me whether I am missing something.

Probably a typo: logs_path: /var/log/continers/*.log should be logs_path: /var/log/containers/*.log

Does this fix the error?

@Andreas_Gkizas - there was indeed a typo, thanks for spotting it. But after fixing it I am still receiving the same errors. Please find them below:

{"log.level":"error","@timestamp":"2023-06-06T10:02:44.142Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T10:02:44.142Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/log/containers/*.log/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T10:02:44.142Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T10:02:44.142Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/log/containers/*.log/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T10:02:44.142Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T10:02:44.142Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/log/containers/*.log/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-06-06T10:02:44.142Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.","service.name":"filebeat","ecs.version":"1.6.0"}


filebeat.inputs:
- type: container
  paths: 
    - '/var/log/containers/*.log'
processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: /var/log/containers/*.log
output.elasticsearch:
  hosts: ["http://10.44.38.153:9200"]

Can you try removing the add_kubernetes_metadata part from processors?

In version 8.5 it is enabled by default, so I expect all your metadata enrichment should still work. Let me know.

Also, checking the online manifests, I see the following path (it comes from the autodiscover example, where ${data.kubernetes.container.id} is resolved by autodiscover):
- /var/log/containers/*-${data.kubernetes.container.id}.log

I am not sure which files your logs path contains, but you can also give that a try.

After removing add_kubernetes_metadata from the processors section, I am getting the error below:

{"log.level":"error","@timestamp":"2023-06-06T11:13:27.604Z","log.origin":{"file.name":"instance/beat.go","file.line":1056},"message":"Exiting: Failed to start crawler: starting input failed: could not unpack config: missing field accessing 'filebeat.inputs.0.paths' (source:'/etc/filebeat.yml')","service.name":"filebeat","ecs.version":"1.6.0"}
Exiting: Failed to start crawler: starting input failed: could not unpack config: missing field accessing 'filebeat.inputs.0.paths' (source:'/etc/filebeat.yml')

filebeat.inputs:
- type: container
  paths: 
    - '/var/log/containers/*-${data.kubernetes.container.id}.log'
processors:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: /var/log/containers/*-${data.kubernetes.container.id}.log
output.elasticsearch:
  hosts: ["http://<elastic-host>:9200"]
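A note on the error above: the "missing field accessing 'filebeat.inputs.0.paths'" message is the typical go-ucfg error for an unresolved ${...} reference. The ${data.kubernetes.container.id} placeholder is only resolved by autodiscover hints, so in a static input it cannot be expanded and config unpacking fails; the leftover host/matchers keys under processors also need to go once the processor itself is removed. A minimal sketch of a plain input without the processor (elastic-host is a placeholder):

```yaml
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
output.elasticsearch:
  hosts: ["http://<elastic-host>:9200"]
```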

Am I missing something?

@Andreas_Gkizas - let me know if any more details are needed.

The basic idea is to ship pod logs written to /var/log/containers/<id>.log to Elasticsearch, along with Kubernetes metadata so the logs can be filtered as required.

Filebeat does not seem to be working as expected

I made a local setup and tested with the following config on 8.5.0:

filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

Everything works on my side. So please fix the indentation (e.g. processors should be nested under the input, at the same level as paths) and set the logs path to logs_path: "/var/log/containers/"
Also make sure that no other Filebeat instance runs in your cluster.
If this does not work I would need:

The filebeat.yml file, some logs from your Filebeat pod, and the output of kubectl get pods -A.

Hope that helps!

Also, I noticed one more minor difference:
You have defaultMode: 0600, but the online example manifest uses defaultMode: 0640.

@Andreas_Gkizas - Many thanks for looking into the issue.

I tried the Filebeat config you posted. I see some permission issues popping up, and Kibana isn't displaying any Kubernetes fields:

","file.line":115},"message":"Template \"filebeat-8.5.0\" already exists and will not be overwritten.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-06-06T14:23:08.013Z","log.logger":"index-management","log.origin":{"file.name":"idxmgmt/std.go","file.line":267},"message":"Loaded index template.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-06-06T14:23:08.013Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":147},"message":"Connection to backoff(elasticsearch(http://10.44.38.153:9200)) established","service.name":"filebeat","ecs.version":"1.6.0"}
W0606 14:23:09.193216       7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes "aks-app-20958021-vmss00000e" is forbidden: User "system:serviceaccount:elk:filebeat" cannot list resource "nodes" in API group "" at the cluster scope
E0606 14:23:09.193262       7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes "aks-app-20958021-vmss00000e" is forbidden: User "system:serviceaccount:elk:filebeat" cannot list resource "nodes" in API group "" at the cluster scope
W0606 14:23:10.821751       7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes "aks-app-20958021-vmss00000e" is forbidden: User "system:serviceaccount:elk:filebeat" cannot list resource "nodes" in API group "" at the cluster scope
E0606 14:23:10.821794       7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes "aks-app-20958021-vmss00000e" is forbidden: User "system:serviceaccount:elk:filebeat" cannot list resource "nodes" in API group "" at the cluster scope
W0606 14:23:15.192440       7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes "aks-app-20958021-vmss00000e" is forbidden: User "system:serviceaccount:elk:filebeat" cannot list resource "nodes" in API group "" at the cluster scope
E0606 14:23:15.192488       7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes "aks-app-20958021-vmss00000e" is forbidden: User "system:serviceaccount:elk:filebeat" cannot list resource "nodes" in API group "" at the cluster scope
W0606 14:23:22.006979       7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes "aks-app-20958021-vmss00000e" is forbidden: User "system:serviceaccount:elk:filebeat" cannot list resource "nodes" in API group "" at the cluster scope
E0606 14:23:22.007018       7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes "aks-app-20958021-vmss00000e" is forbidden: User "system:serviceaccount:elk:filebeat" cannot list resource "nodes" in API group "" at the cluster scope



Please find the kubectl get pods output below:

kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
es-cluster-0              1/1     Running   0          19d
es-cluster-1              1/1     Running   0          19d
es-cluster-2              1/1     Running   0          19d
filebeat-5t5b5            1/1     Running   0          102s
filebeat-6njx6            1/1     Running   0          102s
filebeat-7xcwr            1/1     Running   0          102s
filebeat-btmjk            1/1     Running   0          102s
filebeat-gnw42            1/1     Running   0          102s
filebeat-qsgnt            1/1     Running   0          102s
filebeat-tjb5c            1/1     Running   0          102s
filebeat-v8m6j            1/1     Running   0          102s

Deployment file

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elk
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      #terminationGracePeriodSeconds: 30
      #hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.5.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: xx.xx.xx.xx
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - mountPath: /var/data
          name: vardata
          readOnly: true
      volumes:
      - configMap:
          defaultMode: 0640
          name: filebeat-config
        name: config
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/data
          type: ""
        name: vardata
      - hostPath:
          path: /var/log/pods
          type: ""
        name: varlibdockercontainers
      - emptyDir: {}
        name: data


filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
output.elasticsearch:
  hosts: ["http://xxxx:9200"]
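One thing worth noting about the config above: the DaemonSet already injects ELASTICSEARCH_HOST and ELASTICSEARCH_PORT as environment variables, and Filebeat expands ${...} environment references in its config, so the output could reference them instead of hardcoding the address. A sketch:

```yaml
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}']
```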

OK, I see you are deploying in the elk namespace, and it seems that your service account (system:serviceaccount:elk:filebeat) cannot perform the needed actions.

Please check the following:

The service account needs the cluster-wide RBAC permissions defined in the reference manifest: beats/filebeat-kubernetes.yaml at main · elastic/beats · GitHub.

Please reconfigure accordingly and I think you should be OK.
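The permission missing in the logs above is list/watch on nodes. A minimal sketch of the RBAC objects along the lines of the reference manifest, adjusted for the elk namespace (take the exact resource list from the linked manifest, which also covers namespaces and other resources):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "nodes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: elk
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
```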

Thank you for sharing. I will test it and update.

Finally it worked for me. Thanks a lot :slight_smile:


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.