Filebeat is not shipping all pod logs for a deployment on Kubernetes

Hi all,
I am trying to ship the application logs from all pods of a deployment on my Kubernetes setup, using a Filebeat DaemonSet.

Configuration:


apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: mf
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          in_cluster: true
          tags:
            - "kubernetes"
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.namespace: vignesh
                      kubernetes.labels.app: reachwv
    output:
      kafka:
        hosts:
          - 192.168.8.178:9092
        topic: mf_reachwv_logs
        version: 0.8.2
        bulk_max_size: 1024
        timeout: 30s
        broker_timeout: 10s
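
For comparison, the autodiscover examples in the Filebeat 6.x docs put the actual input under a config section inside the template. A rough sketch of how I understand that would look for my case (the condition and container id variable are taken from my config above; I have not verified this is the missing piece):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      in_cluster: true
      templates:
        - condition:
            equals:
              kubernetes.namespace: vignesh
          config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"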

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: mf
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers:
        path: "/var/lib/docker/overlay2/*/merged/usr/local/mf/logs/awp.log"
        stream: "stdout"
        ids:
          - "${data.kubernetes.container.id}"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
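
This ConfigMap is mounted at /usr/share/filebeat/inputs.d by the DaemonSet below, but I am not sure filebeat.yml ever loads it. From the stock filebeat-kubernetes.yaml, my understanding is that something like the following is needed in filebeat.yml for the inputs.d files to be picked up (this is an assumption on my part):

filebeat.config:
  inputs:
    path: ${path.config}/inputs.d/*.yml
    reload.enabled: false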

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: mf
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:6.7.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml
            - name: inputs
              mountPath: /usr/share/filebeat/inputs.d
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/overlay2
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/overlay2
        - name: inputs
          configMap:
            defaultMode: 0600
            name: filebeat-inputs
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
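
For reference, the upstream filebeat-kubernetes.yaml example mounts the Docker containers directory read-only rather than overlay2; I am not sure which one I actually need here, but a sketch of that variant would be:

          volumeMounts:
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers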

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: mf
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: mf
  labels:
    k8s-app: filebeat

Filebeat output:

2019-04-03T11:43:10.525Z INFO instance/beat.go:280 Setup Beat: filebeat; Version: 6.7.0
2019-04-03T11:43:10.525Z INFO [publisher] pipeline/module.go:110 Beat name: filebeat-gvqhg
2019-04-03T11:43:10.526Z INFO instance/beat.go:402 filebeat start running.
2019-04-03T11:43:10.527Z INFO registrar/registrar.go:134 Loading registrar data from /usr/share/filebeat/data/registry
2019-04-03T11:43:10.527Z INFO registrar/registrar.go:141 States Loaded from registrar: 4
2019-04-03T11:43:10.527Z WARN beater/filebeat.go:367 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-04-03T11:43:10.528Z INFO crawler/crawler.go:72 Loading Inputs: 0
2019-04-03T11:43:10.528Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 0
2019-04-03T11:43:10.528Z WARN [cfgwarn] kubernetes/kubernetes.go:55 BETA: The kubernetes autodiscover is beta
2019-04-03T11:43:10.528Z INFO kubernetes/util.go:86 kubernetes: Using pod name filebeat-gvqhg and namespace mf to discover kubernetes node
2019-04-03T11:43:10.529Z INFO [monitoring] log/log.go:117 Starting metrics logging every 30s
2019-04-03T11:43:10.731Z INFO kubernetes/util.go:93 kubernetes: Using node k8s-worker1 discovered by in cluster pod node query
2019-04-03T11:43:10.734Z INFO autodiscover/autodiscover.go:104 Starting autodiscover manager
2019-04-03T11:43:10.735Z INFO kubernetes/watcher.go:182 kubernetes: Performing a resource sync for *v1.PodList
2019-04-03T11:43:10.737Z INFO kubernetes/watcher.go:198 kubernetes: Resource sync done
2019-04-03T11:43:10.739Z INFO kubernetes/watcher.go:242 kubernetes: Watching API for resource events

I can see the application logs being stored on the local node under /var/lib/docker/overlay2/, and the output above shows Filebeat loading zero inputs (Loading Inputs: 0, Enabled inputs: 0) even though autodiscover starts.
Please help me figure out how to monitor the logs inside the pods.
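
In case it helps, the file I ultimately want to read through the overlay2 hostPath mount is awp.log; I think a plain log input for it would look roughly like this, though I have not confirmed this is the right approach:

- type: log
  paths:
    - /var/lib/docker/overlay2/*/merged/usr/local/mf/logs/awp.log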
