Hi there!
After upgrading Kubernetes from 1.10 to 1.14.1, our Filebeat stopped shipping logs.
I did find something in the Kubernetes docs: "The container log directory changed from /var/lib/docker/ to /var/log/pods/. If you use your own logging solution that monitors the previous directory, update accordingly."
But it still isn't working: /var/log/pods contains a "containers" folder, which is empty. Could you please advise? (ELK version 6.6.2)
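For context, here is my current understanding of the layout on 1.14 with the Docker runtime (my reading of the docs, so please correct me if I've got it wrong): the kubelet keeps per-pod log files under /var/log/pods/<namespace>_<pod>_<uid>/<container>/, and with Docker those files are symlinks to the JSON logs under /var/lib/docker/containers/<container-id>/. So I assume a log shipper needs hostPath volumes roughly like this (a sketch only, volume names are just illustrative):

# Sketch of the host-side directories I believe are involved on 1.14 + Docker
volumes:
- name: varlogpods
  hostPath:
    path: /var/log/pods               # new location mentioned in the 1.14 docs
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers  # symlink targets when Docker is the runtime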
Attaching my DaemonSet YAML:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: "2018-06-26T10:40:05Z"
  generation: 11
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat
  namespace: kube-system
  resourceVersion: "67464142"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/daemonsets/filebeat
  uid: 4a0a1e57-792d-11e8-9321-528d05d8e1e1
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: filebeat
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - args:
        - -c
        - /etc/filebeat.yml
        - -e
        env:
        - name: LOGSTASH_HOSTS
          value: logstash-kube:5044
        image: docker.elastic.co/beats/filebeat:6.6.2
        imagePullPolicy: IfNotPresent
        name: filebeat
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          procMount: Default
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/filebeat.yml
          name: config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /usr/share/filebeat/prospectors.d
          name: prospectors
          readOnly: true
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /var/log/pods/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /var/log
          name: varlog
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: filebeat
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 384
          name: filebeat-config
        name: config
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/log/pods/containers
          type: ""
        name: varlibdockercontainers
      - configMap:
          defaultMode: 384
          name: filebeat-prospectors
        name: prospectors
      - emptyDir: {}
        name: data
  templateGeneration: 11
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 6
  desiredNumberScheduled: 6
  numberAvailable: 6
  numberMisscheduled: 0
  numberReady: 6
  observedGeneration: 10
  updatedNumberScheduled: 6
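For comparison: as far as I remember, the upstream filebeat-kubernetes example manifest mounts the host's /var/lib/docker/containers at the same path inside the Filebeat container, so the absolute symlinks under /var/log/pods and /var/log/containers still resolve from inside the pod. My varlibdockercontainers volume now points at /var/log/pods/containers instead, and I suspect that empty directory only exists because the hostPath mount created it. Is something like this (untested sketch) closer to what the mounts should look like?

# In the filebeat container (sketch, untested):
volumeMounts:
- mountPath: /var/lib/docker/containers   # same path as on the host, so symlinks resolve
  name: varlibdockercontainers
  readOnly: true
- mountPath: /var/log                     # contains /var/log/pods and /var/log/containers
  name: varlog
  readOnly: true
# In the pod spec:
volumes:
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers
- name: varlog
  hostPath:
    path: /var/log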
Configs
apiVersion: v1
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
      scan_frequency: 10s
      close_inactive: 1m
kind: ConfigMap
metadata:
  creationTimestamp: "2019-02-19T19:55:05Z"
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat-prospectors
  namespace: kube-system
  resourceVersion: "42416935"
  selfLink: /api/v1/namespaces/kube-system/configmaps/filebeat-prospectors
  uid: 40568db5-3480-11e9-9194-72f87630c359
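One thing I'm unsure about: as far as I understand, the type: docker prospector reads from containers.path, which defaults to /var/lib/docker/containers, and that path is no longer mounted in my DaemonSet. If the logs have to be read from that directory (or some other one), I assume the path would need to be set explicitly and mounted, roughly like this (hypothetical, not tested):

- type: docker
  containers.ids:
  - "*"
  # containers.path defaults to /var/lib/docker/containers; whichever directory
  # is used here has to actually be mounted into the Filebeat pod.
  containers.path: /var/lib/docker/containers
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
  scan_frequency: 10s
  close_inactive: 1m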
And the main filebeat.yml config:
apiVersion: v1
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    output.logstash:
      hosts: ${LOGSTASH_HOSTS:?No logstash host configured. Use env var LOGSTASH_HOSTS to set hosts.}
kind: ConfigMap
metadata:
  creationTimestamp: "2018-06-26T10:39:24Z"
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat-config
  namespace: kube-system
  resourceVersion: "130661"
  selfLink: /api/v1/namespaces/kube-system/configmaps/filebeat-config
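To see which files Filebeat actually tries to pick up, I'm also thinking of temporarily enabling debug logging; if I read the logging docs right, that would just mean adding this to filebeat.yml (troubleshooting only, to be removed afterwards):

# Troubleshooting sketch; the selectors are my guess from the docs.
logging.level: debug
logging.selectors: ["prospector", "harvester"]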
Could you please advise? Thanks!
Aleksei