Please find below my filebeat.yml:
filebeat.inputs:
  - type: container
    paths:
      - '/var/log/containers/*.log'

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            logs_path: /var/log/continers/*.log

output.elasticsearch:
  hosts: ["http://<Elastic-host>:9200"]
And the Filebeat DaemonSet manifest (filebeat-deployment):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elk
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      #terminationGracePeriodSeconds: 30
      #hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.5.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: LOGSTASH_URL
              value: "<logstash-url>:5044"
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: azure
              mountPath: /var
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        - name: azure
          azureFile:
            readOnly: false
            secretName: logs
            shareName: logs
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
The data is being indexed into Elasticsearch, but fields such as the Kubernetes namespace, container name, etc. are missing, so I cannot filter on them.
I also see errors like the following in the Filebeat logs:
{"log.level":"error","@timestamp":"2023-01-20T15:39:43.815Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/log/containers/'.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-01-20T15:39:43.815Z","log.logger":"kubernetes","log.origin":{"file.name":"add_kubernetes_metadata/matchers.go","file.line":95},"message":"Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.","service.name":"filebeat","ecs.version":"1.6.0"}
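One way to check where the container logs actually live is to inspect the filesystem from inside a running Filebeat pod. A sketch, using the namespace and label from the manifest above (`<filebeat-pod-name>` is a placeholder for one of the pod names returned by the first command):

```
# Find the Filebeat pods created by the DaemonSet
kubectl -n elk get pods -l app=filebeat -o wide

# List the log files the pod can actually see at the expected path
kubectl -n elk exec <filebeat-pod-name> -- ls /var/log/containers
```

If the second command errors or returns an empty listing, the path configured in the `logs_path` matcher does not match what Filebeat sees inside the container.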
I found the cause by reading the Filebeat logs: the add_kubernetes_metadata processor was unable to match the container log path, so no Kubernetes metadata was attached. After checking the actual log location inside the Filebeat pod, I updated the config with the correct path, and it works perfectly now.
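For reference, a sketch of a corrected processor block. Note that, per the error messages above, the `logs_path` matcher compares the log file's path against the configured value as a prefix, so a plain directory path (matching what the error message reports, `/var/log/containers/`) is used here rather than a glob:

```yaml
processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            # Directory prefix that container log paths must start with
            logs_path: "/var/log/containers/"
```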