I'm running a pod with one container writing into files and one sidecar. The sidecar simply tails the mounted file from the main container to stdout.
I have spent nearly the whole day trying to get the lines into Filebeat (7.4.2), but it's not working.
Here is my ConfigMap YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          include_annotations: ['co.elastic.logs/fileset.stderr','co.elastic.logs/fileset.stdout']
          templates:
            - condition:
                contains:
                  kubernetes.container.name: "sc-errorlog"
              config:
                - type: docker
                  containers.ids:
                    - "*"
    output.logstash:
      hosts: ['${LOGSTASH_HOST}:${LOGSTASH_PORT}']
My assumption: the newly started container becomes visible to autodiscovery based on the container name "sc-errorlog", and then the stdout of that container gets harvested by the Filebeat instance running on that node via the DaemonSet.
The first step looks OK, i.e. Filebeat logs that the container gets inspected:
INFO input/input.go:114 Starting input of type: docker; ID: 17399625233690692518
but that's all. Nothing more happens.
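One thing worth double-checking (a sketch of the usual DaemonSet volume wiring, assuming a Docker runtime; the names follow the standard Filebeat Kubernetes manifests, not my actual deployment): the docker input reads the container JSON logs from the host, so the Filebeat DaemonSet needs /var/lib/docker/containers mounted. If that mount is missing, the input starts but never finds any log files to harvest.

```yaml
# Excerpt from a typical Filebeat DaemonSet spec (hypothetical, for illustration).
containers:
- name: filebeat
  volumeMounts:
  - name: varlibdockercontainers
    mountPath: /var/lib/docker/containers
    readOnly: true
volumes:
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers
```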
Question: is it possible to collect stdout from a Kubernetes container based on autodiscovery?
This is the example pod I am running, from https://kubernetes.io/docs/concepts/cluster-administration/logging/
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: sc-errorlog
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
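Since hints.enabled is true in the ConfigMap, an alternative worth trying (a sketch based on Filebeat's hints-based autodiscover, which supports per-container co.elastic.logs.<container_name>/... annotations; I haven't verified it in this setup) is to drive collection via a pod annotation instead of a template condition:

```yaml
# Hypothetical variant of the pod metadata: enable log collection
# for the sc-errorlog container only, via Filebeat hint annotations.
apiVersion: v1
kind: Pod
metadata:
  name: counter
  annotations:
    co.elastic.logs/enabled: "false"              # default off for this pod
    co.elastic.logs.sc-errorlog/enabled: "true"   # collect only the sidecar
spec:
  # containers and volumes unchanged from the pod above
```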