Filebeat unable to send data to Logstash, which results in empty data in Elasticsearch & Kibana

I am trying to deploy the ELK stack on the OpenShift platform (OKD v3.11), using Filebeat to detect the logs automatically.

ELK stack versions:
Filebeat - 6.4.1
Logstash - 6.3.1
Elasticsearch - 6.5.4
Kibana - 6.5.4

Please find the template below:

apiVersion: v1
kind: Template
metadata:
  name: logstash-filebeat
  annotations:
    description: logstash and filebeat template for openshift (version 6.3.1/6.4.1)
    tags: log,storage,data,visualization
objects:
- apiVersion: v1
  kind: SecurityContextConstraints
  metadata:
    name: hostpath
  allowPrivilegedContainer: true
  allowHostDirVolumePlugin: true
  runAsUser:
    type: RunAsAny
  seLinuxContext:
    type: RunAsAny
  fsGroup:
    type: RunAsAny
  supplementalGroups:
    type: RunAsAny
  users:
  - my-admin-user
  groups:
  - my-admin-group
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: logging-configmap
  data:
    logstash.yml: |
      http.host: "0.0.0.0"
      http.port: 5044
      path.config: /usr/share/logstash/pipeline
      pipeline.workers: 1
      pipeline.output.workers: 1
      xpack.monitoring.enabled: false
    logstash.conf: |
     input {
       beats {
         client_inactivity_timeout => 86400
         port => 5044
       }
     }
     filter {
       if "beats_input_codec_plain_applied" in [tags] {
         mutate {
           rename => ["log", "message"]
           add_tag => [ "DBBKUP", "kubernetes" ]
         }
         mutate {
             remove_tag => ["beats_input_codec_plain_applied"]
         }
         date {
           match => ["time", "ISO8601"]
           remove_field => ["time"]
         }
         grok {
             #match => { "source" => "/var/log/containers/%{DATA:pod_name}_%{DATA:namespace}_%{GREEDYDATA:container_name}-%{DATA:container_id}.log" }
             #remove_field => ["source"]
             match => { "message" => "%{TIMESTAMP_ISO8601:LogTimeStamp}%{SPACE}%{GREEDYDATA:Message}" }
             remove_field => ["message"]
             add_tag => ["DBBKUP"]
         }

         if "DBBKUP" in [tags] and "vz1-warrior-job" in [kubernetes][pod][name] {
           grok {
             match => { "message" => "%{GREEDYDATA:bkupLog}" }
             remove_field => ["message"]
             add_tag => ["WARJOBS"]
             remove_tag => ["DBBKUP"]
           }
         }
       }
     }

     output {
          elasticsearch {
             #hosts => "localhost:9200"
              hosts => "index.elastic:9200"
              manage_template => false
              index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
              #document_type => "%{[@metadata][type]}"
          }
     }
    filebeat.yml: |
      #filebeat.registry_file: /var/tmp/filebeat/filebeat_registry # store the registry on the host filesystem so it doesn't get lost when pods are stopped
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            tags:
              - "kube-logs"
            templates:
              - condition:
                  or:
                    - contains:
                        kubernetes.pod.name: "db-backup-ne-mgmt"
                    - contains:
                        kubernetes.pod.name: "db-backup-list-manager"
                    - contains:
                        kubernetes.pod.name: "db-backup-scheduler"
                config:
                  - type: docker
                    containers.ids:
                      - "${data.kubernetes.container.id}"
                    multiline.pattern: '^[[:space:]]'
                    multiline.negate: false
                    multiline.match: after
      processors:
        - drop_event:
            when.or:
               - equals:
                   kubernetes.namespace: "kube-system"
               - equals:
                   kubernetes.namespace: "default"
               - equals:
                   kubernetes.namespace: "logging"
      output.logstash:
       hosts: ["logstash-service.logging:5044"]
       index: filebeat

      setup.template.name: "filebeat"
      setup.template.pattern: "filebeat-*"
    kibana.yml: |
     elasticsearch.url: "http://index.elastic:9200"
- apiVersion: v1
  kind: Service
  metadata:
    name: logstash-service
  spec:
    clusterIP:
    externalTrafficPolicy: Cluster
    ports:
    - nodePort: 31481
      port: 5044
      protocol: TCP
      targetPort: 5044
    selector:
      app: logstash
    sessionAffinity: None
    type: NodePort
  status:
    loadBalancer: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    labels:
      app: logstash
    name: logstash-deployment
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: logstash
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        labels:
          app: logstash
      spec:
        containers:
        - env:
          - name: ES_VERSION
            value: 2.4.6
          image: docker.elastic.co/logstash/logstash:6.3.1
          imagePullPolicy: IfNotPresent
          name: logstash
          ports:
          - containerPort: 5044
            protocol: TCP
          resources:
            limits:
              cpu: "1"
              memory: 4Gi
            requests:
              cpu: "1"
              memory: 4Gi
          volumeMounts:
          - mountPath: /usr/share/logstash/config
            name: config-volume
          - mountPath: /usr/share/logstash/pipeline
            name: logstash-pipeline-volume
        volumes:
        - configMap:
            items:
            - key: logstash.yml
              path: logstash.yml
            name: logging-configmap
          name: config-volume
        - configMap:
            items:
            - key: logstash.conf
              path: logstash.conf
            name: logging-configmap
          name: logstash-pipeline-volume
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    labels:
      app: filebeat
    name: filebeat
  spec:
    selector:
      matchLabels:
        app: filebeat
    template:
      metadata:
        labels:
          app: filebeat
        name: filebeat
      spec:
        serviceAccountName: filebeat-serviceaccount
        containers:
        - args:
          - -e
          - -path.config
          - /usr/share/filebeat/config
          command:
          - /usr/share/filebeat/filebeat
          env:
          - name: LOGSTASH_HOSTS
            value: logstash-service:5044
          - name: LOG_LEVEL
            value: info
          - name: FILEBEAT_HOST
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          image: docker.elastic.co/beats/filebeat:6.4.1
          imagePullPolicy: IfNotPresent
          name: filebeat
          resources:
            limits:
              cpu: 500m
              memory: 4Gi
            requests:
              cpu: 500m
              memory: 4Gi
          volumeMounts:
          - mountPath: /usr/share/filebeat/config
            name: config-volume
          - mountPath: /var/log/hostlogs
            name: varlog
            readOnly: true
          - mountPath: /var/log/containers
            name: varlogcontainers
            readOnly: true
          - mountPath: /var/log/pods
            name: varlogpods
            readOnly: true
          - mountPath: /var/lib/docker/containers
            name: varlibdockercontainers
            readOnly: true
          - mountPath: /var/tmp/filebeat
            name: vartmp
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext:
          runAsUser: 0
          privileged: true
        tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
        volumes:
        - hostPath:
            path: /var/log
            type: ""
          name: varlog
        - hostPath:
            path: /var/tmp
            type: ""
          name: vartmp
        - hostPath:
            path: /var/log/containers
            type: ""
          name: varlogcontainers
        - hostPath:
            path: /var/log/pods
            type: ""
          name: varlogpods
        - hostPath:
            path: /var/lib/docker/containers
            type: ""
          name: varlibdockercontainers
        - configMap:
            items:
            - key: filebeat.yml
              path: filebeat.yml
            name: logging-configmap
          name: config-volume

- apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRoleBinding
  metadata:
    name: filebeat-clusterrolebinding
    namespace: logging
  subjects:
  - kind: ServiceAccount
    name: filebeat-serviceaccount
    namespace: logging
  roleRef:
    kind: ClusterRole
    name: filebeat-clusterrole
    apiGroup: rbac.authorization.k8s.io

- apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRole
  metadata:
    name: filebeat-clusterrole
    namespace: logging
  rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
    - namespaces
    - pods
    verbs:
    - get
    - watch
    - list

- apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: filebeat-serviceaccount
    namespace: logging

The Kibana dashboard is up, and the Elasticsearch & Logstash APIs are working fine, but Filebeat is not sending the data to Logstash: I do not see any data arriving on the Logstash listener on port 5044.
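(A sanity check along these lines — using the names from the template above — confirms that the Service actually points at the Logstash pod:

oc get endpoints logstash-service -n logging

If the ENDPOINTS column is empty, the selector or pod labels are the problem rather than Filebeat.)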

I found on the Elastic forums that the following iptables command might resolve my issue, but no luck:

iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10

Still nothing arrives on the Logstash listener. Please help me if I am missing anything, and let me know if you need any more information.

NOTE:
The filebeat.yml, logstash.yml & logstash.conf files work perfectly when deployed on plain Kubernetes.

Hi!

So Logstash is reachable from the Filebeat pod, but Filebeat is not sending data?

Could you start Filebeat at debug log level and check whether there is anything problematic in the logs?
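For reference, on Filebeat 6.x the verbosity can be raised either via command-line flags or in filebeat.yml — a minimal sketch of both (pick one):

# as flags, e.g. appended to the DaemonSet container args
/usr/share/filebeat/filebeat -e -d "*"

# or as a setting in filebeat.yml
logging.level: debug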

@ChrsMark Thanks for your reply.

Yes, Logstash is reachable from Filebeat.
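(Reachability was checked with a quick TCP probe along these lines — a sketch; the pod name is taken from the logs below, and it assumes bash is present in the Filebeat image:

oc exec filebeat-c7sg2 -n logging -- bash -c 'echo > /dev/tcp/logstash-service.logging/5044 && echo "port 5044 open"')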

I guess Filebeat is unable to send data to Logstash, since no data arrives on the Logstash listener.

Please find the Filebeat logs without debug mode below; the pods do not show up if I run Filebeat in debug mode:

[centos@fwma-master elk]$ oc logs -f filebeat-c7sg2 -n logging
2020-05-05T04:59:20.301Z	INFO	instance/beat.go:544	Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat/config] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2020-05-05T04:59:20.309Z	INFO	instance/beat.go:551	Beat UUID: aed0e259-a8cf-406c-a1a6-befc65d89ef1
2020-05-05T04:59:20.309Z	INFO	[seccomp]	seccomp/seccomp.go:116	Syscall filter successfully installed
2020-05-05T04:59:20.309Z	INFO	[beat]	instance/beat.go:768	Beat info	{"system_info": {"beat": {"path": {"config": "/usr/share/filebeat/config", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "aed0e259-a8cf-406c-a1a6-befc65d89ef1"}}}
2020-05-05T04:59:20.309Z	INFO	[beat]	instance/beat.go:777	Build info	{"system_info": {"build": {"commit": "37b5f2d2a20f2734b2373a454b4b4cbb2627e841", "libbeat": "6.4.1", "time": "2018-09-13T21:25:47.000Z", "version": "6.4.1"}}}
2020-05-05T04:59:20.309Z	INFO	[beat]	instance/beat.go:780	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.10.3"}}}
2020-05-05T04:59:20.312Z	INFO	[beat]	instance/beat.go:784	Host info	{"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-04-29T08:13:29Z","containerized":true,"hostname":"filebeat-c7sg2","ips":["127.0.0.1/8","::1/128","10.129.0.54/23","fe80::802b:82ff:fe29:c651/64"],"kernel_version":"3.10.0-1062.18.1.el7.x86_64","mac_addresses":["0a:58:0a:81:00:36"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":5,"patch":1804,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0,"id":"14759c8d771e43a2b10f7402e8060d8a"}}}
2020-05-05T04:59:20.312Z	INFO	[beat]	instance/beat.go:813	Process info	{"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2020-05-05T04:59:19.160Z"}}}
2020-05-05T04:59:20.312Z	INFO	instance/beat.go:273	Setup Beat: filebeat; Version: 6.4.1
2020-05-05T04:59:20.313Z	INFO	pipeline/module.go:98	Beat name: filebeat-c7sg2
2020-05-05T04:59:20.314Z	INFO	[monitoring]	log/log.go:114	Starting metrics logging every 30s
2020-05-05T04:59:20.314Z	INFO	instance/beat.go:367	filebeat start running.
2020-05-05T04:59:20.314Z	INFO	registrar/registrar.go:97	No registry file found under: /usr/share/filebeat/data/registry. Creating a new registry file.
2020-05-05T04:59:20.337Z	INFO	registrar/registrar.go:134	Loading registrar data from /usr/share/filebeat/data/registry
2020-05-05T04:59:20.337Z	INFO	registrar/registrar.go:141	States Loaded from registrar: 0
2020-05-05T04:59:20.337Z	WARN	beater/filebeat.go:371	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-05-05T04:59:20.337Z	INFO	crawler/crawler.go:72	Loading Inputs: 0
2020-05-05T04:59:20.337Z	INFO	crawler/crawler.go:106	Loading and starting Inputs completed. Enabled inputs: 0
2020-05-05T04:59:20.337Z	WARN	[cfgwarn]	kubernetes/kubernetes.go:51	BETA: The kubernetes autodiscover is beta
2020-05-05T04:59:20.338Z	INFO	kubernetes/util.go:86	kubernetes: Using pod name filebeat-c7sg2 and namespace logging to discover kubernetes node
2020-05-05T04:59:20.400Z	INFO	kubernetes/util.go:93	kubernetes: Using node node2.novalocal discovered by in cluster pod node query
2020-05-05T04:59:20.400Z	INFO	autodiscover/autodiscover.go:102	Starting autodiscover manager
2020-05-05T04:59:20.401Z	INFO	kubernetes/watcher.go:180	kubernetes: Performing a resource sync for *v1.PodList
2020-05-05T04:59:20.410Z	INFO	kubernetes/watcher.go:194	kubernetes: Resource sync done
2020-05-05T04:59:20.411Z	INFO	kubernetes/watcher.go:238	kubernetes: Watching API for resource events
2020-05-05T04:59:20.411Z	WARN	[cfgwarn]	docker/input.go:46	EXPERIMENTAL: Docker input is enabled.
2020-05-05T04:59:20.412Z	INFO	log/input.go:138	Configured paths: [/var/lib/docker/containers/6b982a904d28be51680359354933341765481440b9f6623f38731055ed579967/*.log]
2020-05-05T04:59:20.412Z	INFO	input/input.go:114	Starting input of type: docker; ID: 15166487955143448204 
2020-05-05T04:59:20.412Z	WARN	[cfgwarn]	docker/input.go:46	EXPERIMENTAL: Docker input is enabled.
2020-05-05T04:59:20.412Z	INFO	log/input.go:138	Configured paths: [/var/lib/docker/containers/da89d185fdae79318802701a0ece717dc4d267fd8c91be177d492fd79d2e403b/*.log]
2020-05-05T04:59:20.412Z	INFO	input/input.go:114	Starting input of type: docker; ID: 5961691800222948832 
2020-05-05T04:59:20.412Z	WARN	[cfgwarn]	docker/input.go:46	EXPERIMENTAL: Docker input is enabled.
2020-05-05T04:59:20.413Z	INFO	log/input.go:138	Configured paths: [/var/lib/docker/containers/2ea65753831ac3725e0f8666c7d46c1124b55ff55cc69d7fa28a42e79191fc5c/*.log]
2020-05-05T04:59:20.413Z	INFO	input/input.go:114	Starting input of type: docker; ID: 1122486382829614229 
2020-05-05T04:59:20.413Z	WARN	[cfgwarn]	docker/input.go:46	EXPERIMENTAL: Docker input is enabled.
2020-05-05T04:59:20.413Z	INFO	log/input.go:138	Configured paths: [/var/lib/docker/containers/d83d91fa4cf8b44a270d5579c4f53f570620aa6c3e5b6618f93a611ade84a940/*.log]
2020-05-05T04:59:20.413Z	INFO	input/input.go:114	Starting input of type: docker; ID: 2366798464802198058 
2020-05-05T04:59:20.413Z	WARN	[cfgwarn]	docker/input.go:46	EXPERIMENTAL: Docker input is enabled.
2020-05-05T04:59:20.413Z	INFO	log/input.go:138	Configured paths: [/var/lib/docker/containers/a17b5de9acde2f62e777510735f968606021ad2b50d2031f47dc66aad9f54f6a/*.log]
2020-05-05T04:59:20.413Z	INFO	input/input.go:114	Starting input of type: docker; ID: 2389586874824463 
2020-05-05T04:59:20.413Z	WARN	[cfgwarn]	docker/input.go:46	EXPERIMENTAL: Docker input is enabled.
2020-05-05T04:59:20.414Z	INFO	log/input.go:138	Configured paths: [/var/lib/docker/containers/c8b87686e4c9347879c901abe148b946f81028a9fbfc2346cfd3159f490cd438/*.log]
2020-05-05T04:59:20.414Z	INFO	input/input.go:114	Starting input of type: docker; ID: 413666671966090677 
2020-05-05T04:59:20.414Z	WARN	[cfgwarn]	docker/input.go:46	EXPERIMENTAL: Docker input is enabled.
2020-05-05T04:59:20.414Z	INFO	log/input.go:138	Configured paths: [/var/lib/docker/containers/2dbf15f6189b10475818e0a819a13a6d3b93d989cfe5c74dca0f214e8280d068/*.log]
2020-05-05T04:59:20.414Z	INFO	input/input.go:114	Starting input of type: docker; ID: 8753851763796143602 
2020-05-05T04:59:20.414Z	WARN	[cfgwarn]	docker/input.go:46	EXPERIMENTAL: Docker input is enabled.
2020-05-05T04:59:20.414Z	INFO	log/input.go:138	Configured paths: [/var/lib/docker/containers/f061c0749db0c11db05d6f8aad7a1f74c3f536d64bf20f1052879f0046ad3320/*.log]
2020-05-05T04:59:20.415Z	INFO	input/input.go:114	Starting input of type: docker; ID: 9426595889150449663 
2020-05-05T04:59:20.415Z	WARN	[cfgwarn]	docker/input.go:46	EXPERIMENTAL: Docker input is enabled.
2020-05-05T04:59:20.415Z	INFO	log/input.go:138	Configured paths: [/var/lib/docker/containers/ea4c231bc2ee1073f097b5b63a82b349973013787a4e5f0eaba4390e1622b942/*.log]
2020-05-05T04:59:20.415Z	INFO	input/input.go:114	Starting input of type: docker; ID: 7588078993990071377 
2020-05-05T04:59:50.318Z	INFO	[monitoring]	log/log.go:141	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":40,"time":{"ms":40}},"total":{"ticks":70,"time":{"ms":79},"value":70},"user":{"ticks":30,"time":{"ms":39}}},"info":{"ephemeral_id":"83376cfb-172e-465a-a417-75a0badebd3c","uptime":{"ms":30026}},"memstats":{"gc_next":4194304,"memory_alloc":3198528,"memory_total":6017296,"rss":15699968}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":9,"events":{"active":0}}},"registrar":{"states":{"current":0},"writes":{"success":1,"total":1}},"system":{"cpu":{"cores":4},"load":{"1":0.29,"15":0.88,"5":0.6,"norm":{"1":0.0725,"15":0.22,"5":0.15}}}}}}

And I do not see any specific errors in the Logstash & Filebeat logs.

Thanks for sharing the logs! You are correct that there is nothing showing an error. However, in the last log line I see that only one registrar write is reported ("writes":{"success":1,"total":1}) and the harvester metrics show "open_files":0,"running":0, which makes me think that logs are not being collected.

We need more verbose logging here. Did you try to set the log level, and it didn't work? Another option would be to set the output to console and see whether logs are collected: https://www.elastic.co/guide/en/beats/filebeat/current/console-output.html
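As a minimal sketch, the console output in filebeat.yml would look like this (only one output may be enabled at a time, so output.logstash has to be commented out first):

output.console:
  pretty: true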

No, I did not try the log level yet.

I have added the debug log level and set the output to console. I can see the service logs in the Filebeat output.

Please find the log file below. The log file is quite big, so I have shared only the scheduler-service portion of the entire log:

2020-05-06T05:35:47.272Z	DEBUG	[kubernetes]	kubernetes/kubernetes.go:107	Watcher Pod add: metadata:<name:"db-backup-scheduler-5475b6885b-xzsvx" generateName:"db-backup-scheduler-5475b6885b-" namespace:"vz1-db-backup" selfLink:"/api/v1/namespaces/vz1-db-backup/pods/db-backup-scheduler-5475b6885b-xzsvx" uid:"56e4d790-8f5b-11ea-a4e2-fa163ec2dc37" resourceVersion:"5612197" generation:0 creationTimestamp:<seconds:1588743305 nanos:0 > labels:<key:"app" value:"db-backup-scheduler" > labels:<key:"pod-template-hash" value:"1031624416" > annotations:<key:"openshift.io/scc" value:"anyuid" > ownerReferences:<apiVersion:"apps/v1" kind:"ReplicaSet" name:"db-backup-scheduler-5475b6885b" uid:"56e10731-8f5b-11ea-a4e2-fa163ec2dc37" controller:true blockOwnerDeletion:true > clusterName:"" > spec:<volumes:<name:"default-token-dnmkl" volumeSource:<secret:<secretName:"default-token-dnmkl" defaultMode:420 > > > containers:<name:"db-backup-scheduler" image:"fnc-docker-reg:3000/scheduler:RL1.2.0" workingDir:"" ports:<name:"" hostPort:0 containerPort:8002 protocol:"TCP" hostIP:"" > resources:<> volumeMounts:<name:"default-token-dnmkl" readOnly:true mountPath:"/var/run/secrets/kubernetes.io/serviceaccount" subPath:"" > terminationMessagePath:"/dev/termination-log" terminationMessagePolicy:"File" imagePullPolicy:"IfNotPresent" securityContext:<capabilities:<drop:"MKNOD" > > stdin:false stdinOnce:false tty:false > restartPolicy:"Always" terminationGracePeriodSeconds:30 dnsPolicy:"ClusterFirst" nodeSelector:<key:"node-role.kubernetes.io/compute" value:"true" > serviceAccountName:"default" serviceAccount:"default" nodeName:"node3.novalocal" hostNetwork:false hostPID:false hostIPC:false securityContext:<seLinuxOptions:<user:"" role:"" type:"" level:"s0:c30,c25" > > imagePullSecrets:<name:"default-dockercfg-h6qqs" > hostname:"" subdomain:"" schedulerName:"default-scheduler" priorityClassName:"" priority:0 > status:<phase:"Pending" conditions:<type:"PodScheduled" status:"True" lastProbeTime:<> lastTransitionTime:<seconds:1588743305 nanos:0 > reason:"" message:"" > message:"" reason:"" hostIP:"" podIP:"" qosClass:"BestEffort" 11:"" > 
2020-05-06T05:35:47.322Z	DEBUG	[kubernetes]	kubernetes/kubernetes.go:111	Watcher Pod update: metadata:<name:"db-backup-scheduler-5475b6885b-xzsvx" generateName:"db-backup-scheduler-5475b6885b-" namespace:"vz1-db-backup" selfLink:"/api/v1/namespaces/vz1-db-backup/pods/db-backup-scheduler-5475b6885b-xzsvx" uid:"56e4d790-8f5b-11ea-a4e2-fa163ec2dc37" resourceVersion:"5612201" generation:0 creationTimestamp:<seconds:1588743305 nanos:0 > labels:<key:"app" value:"db-backup-scheduler" > labels:<key:"pod-template-hash" value:"1031624416" > annotations:<key:"openshift.io/scc" value:"anyuid" > ownerReferences:<apiVersion:"apps/v1" kind:"ReplicaSet" name:"db-backup-scheduler-5475b6885b" uid:"56e10731-8f5b-11ea-a4e2-fa163ec2dc37" controller:true blockOwnerDeletion:true > clusterName:"" > spec:<volumes:<name:"default-token-dnmkl" volumeSource:<secret:<secretName:"default-token-dnmkl" defaultMode:420 > > > containers:<name:"db-backup-scheduler" image:"fnc-docker-reg:3000/scheduler:RL1.2.0" workingDir:"" ports:<name:"" hostPort:0 containerPort:8002 protocol:"TCP" hostIP:"" > resources:<> volumeMounts:<name:"default-token-dnmkl" readOnly:true mountPath:"/var/run/secrets/kubernetes.io/serviceaccount" subPath:"" > terminationMessagePath:"/dev/termination-log" terminationMessagePolicy:"File" imagePullPolicy:"IfNotPresent" securityContext:<capabilities:<drop:"MKNOD" > > stdin:false stdinOnce:false tty:false > restartPolicy:"Always" terminationGracePeriodSeconds:30 dnsPolicy:"ClusterFirst" nodeSelector:<key:"node-role.kubernetes.io/compute" value:"true" > serviceAccountName:"default" serviceAccount:"default" nodeName:"node3.novalocal" hostNetwork:false hostPID:false hostIPC:false securityContext:<seLinuxOptions:<user:"" role:"" type:"" level:"s0:c30,c25" > > imagePullSecrets:<name:"default-dockercfg-h6qqs" > hostname:"" subdomain:"" schedulerName:"default-scheduler" priorityClassName:"" priority:0 > status:<phase:"Pending" conditions:<type:"Initialized" status:"True" lastProbeTime:<> lastTransitionTime:<seconds:1588743347 nanos:0 > reason:"" message:"" > conditions:<type:"Ready" status:"False" lastProbeTime:<> lastTransitionTime:<seconds:1588743347 nanos:0 > reason:"ContainersNotReady" message:"containers with unready status: [db-backup-scheduler]" > conditions:<type:"ContainersReady" status:"False" lastProbeTime:<> lastTransitionTime:<> reason:"ContainersNotReady" message:"containers with unready status: [db-backup-scheduler]" > conditions:<type:"PodScheduled" status:"True" lastProbeTime:<> lastTransitionTime:<seconds:1588743305 nanos:0 > reason:"" message:"" > message:"" reason:"" hostIP:"167.254.204.61" podIP:"" startTime:<seconds:1588743347 nanos:0 > containerStatuses:<name:"db-backup-scheduler" state:<waiting:<reason:"ContainerCreating" message:"" > > lastState:<> ready:false restartCount:0

Please let me know if I can upload the complete log file.

I don't think these logs are really helpful. We need to see whether autodiscover is able to identify pods and start collecting from them, which is why we need the complete output in debug mode. You can post it to a pastebin tool and share the link here.

In addition, I suspect there may be something wrong with the Filebeat configuration. In that case I would debug the configuration step by step, starting from a very basic configuration that works and adding settings back one at a time; see the sketch below.
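For example, a very basic configuration to start from could look like this — a sketch, not the exact config from this thread; it tails the Docker JSON logs of all containers and prints events to the console:

filebeat.inputs:
- type: docker
  containers.ids:
    - '*'
output.console:
  pretty: true

Once events show up, swap the console output for output.logstash and re-add the autodiscover templates and processors one at a time.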

Thanks for your suggestion about pastebin.

Please check the link for filebeat log -

https://pastebin.com/jV1vMaJd

I have found a "Permission denied" error in the Filebeat logs.

I am also unable to access files under the "/var/lib/docker/containers/{container-id}" path.

I have tried both the "privileged" and "anyuid" SCC policies, but neither works. Please let me know what RBAC I can apply to resolve this issue.
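(For reference, the SCC grants were applied along these lines — a sketch using the service account from the template above:

oc adm policy add-scc-to-user privileged -z filebeat-serviceaccount -n logging
oc adm policy add-scc-to-user anyuid -z filebeat-serviceaccount -n logging)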

Hey!

Here is a manifest that is tested on OpenShift too: https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml

Please have a look and verify that you are using the proper configuration options.

Also, here is the documentation on running Filebeat on OpenShift: https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html#_red_hat_openshift_configuration

Let me know if that helps!

The "Permission denied" error is resolved by setting the SELinux status to permissive using the following:

sudo setenforce Permissive
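(A general SELinux note, not something from this thread: setenforce only changes the mode until the next reboot; to make the change persistent on each node, the mode would also have to be set in /etc/selinux/config, e.g. SELINUX=permissive.)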

Then the logs successfully sync with ELK. Thanks, Chris Mark, for your time and help.
