Hi, I am trying to set up Filebeat in a Kubernetes cluster. I only want to gather logs from specific pods.
I decided to run Filebeat as a DaemonSet and use autodiscover with hints, so I can choose via pod annotations which containers I want to monitor in the Elastic Stack.
My filebeat config looks like this:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        enabled: false
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log # CRI path

output.elasticsearch:
  host: '${NODE_NAME}'
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
We have an nginx container running. There is no custom logging config; we just copy the website files into the nginx image during the Docker build, so the container uses the default nginx configuration.
The pod has the following annotations:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    app.gitlab.com/app: myapp
    app.gitlab.com/env: myenv
    cni.projectcalico.org/podIP: REMOVED
    cni.projectcalico.org/podIPs: REMOVED
    co.elastic.logs/enabled: "true"
    co.elastic.logs/module: nginx
    kubernetes.io/psp: global-unrestricted-psp
    prometheus.io/scrape: "false"
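If I read the hints-based autodiscover docs correctly, the fileset can also be pinned per output stream via annotations (a sketch I have not tried yet, so please correct me if the keys are wrong):

```yaml
metadata:
  annotations:
    co.elastic.logs/enabled: "true"
    co.elastic.logs/module: nginx
    # Route container stdout to the access fileset and stderr to the error fileset.
    co.elastic.logs/fileset.stdout: access
    co.elastic.logs/fileset.stderr: error
```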
But in Kibana I see that none of the events have been parsed correctly.
The value of the field error.message is this:
Provided Grok expressions do not match field value: [192.168.1.1 - - [13/Jan/2022:14:28:20 +0100] \"GET / HTTP/1.1\" 200 536 \"-\" \"kube-probe/1.19\" \"-\"\n192.168.1.1 - - [13/Jan/2022:14:28:21 +0100] \"GET / HTTP/1.1\" 200 536 \"-\" \"kube-probe/1.19\" \"-\"]
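Judging from that error.message, the field contains two access-log lines joined by a newline inside a single event, while the module's grok pattern presumably expects exactly one line. A minimal sketch of why that fails, using a simplified, hypothetical regex as a stand-in for the module's actual grok expression:

```python
import re

# Hypothetical, simplified stand-in for the nginx access-log grok pattern.
# The relevant property is that it is anchored to a single line (^ ... $).
ACCESS_LINE = re.compile(
    r'^(?P<client>\S+) \S+ (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"(?: "[^"]*")?$'
)

single = ('192.168.1.1 - - [13/Jan/2022:14:28:20 +0100] '
          '"GET / HTTP/1.1" 200 536 "-" "kube-probe/1.19" "-"')
# What error.message shows: two log lines merged into one event.
double = single + "\n" + single

print(bool(ACCESS_LINE.match(single)))  # True: a single line parses
print(bool(ACCESS_LINE.match(double)))  # False: the merged two-line value does not
```

So the grok expression itself is probably fine; the problem seems to be that two log lines end up in one event before parsing.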
What do I need to do to parse the default nginx logs successfully?
Later I also want to parse the logs of the Kubernetes ingress-nginx controller too, but that will be the next step.
Thanks in advance,
Andreas