Filebeat unable to send data to Logstash, resulting in empty data in Elasticsearch & Kibana

I am trying to deploy the ELK stack on the OpenShift platform (OKD v3.11), using Filebeat to automatically detect and ship the logs.

ELK stack versions:
Filebeat - 6.4.1
Logstash - 6.3.1
Elasticsearch - 6.5.4
Kibana - 6.5.4

Here is the template I am using:

apiVersion: v1
kind: Template
metadata:
  name: logstash-filebeat
  annotations:
    description: logstash and filebeat template for openshift (version 6.3.1/6.4.1)
    tags: log,storage,data,visualization
objects:
- apiVersion: v1
  kind: SecurityContextConstraints
  metadata:
    name: hostpath
  allowPrivilegedContainer: true
  allowHostDirVolumePlugin: true
  runAsUser:
    type: RunAsAny
  seLinuxContext:
    type: RunAsAny
  fsGroup:
    type: RunAsAny
  supplementalGroups:
    type: RunAsAny
  users:
  - my-admin-user
  groups:
  - my-admin-group
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: logging-configmap
  data:
    logstash.yml: |
      http.host: "0.0.0.0"
      http.port: 5044
      path.config: /usr/share/logstash/pipeline
      pipeline.workers: 1
      pipeline.output.workers: 1
      xpack.monitoring.enabled: false
    logstash.conf: |
     input {
       beats {
         client_inactivity_timeout => 86400
         port => 5044
       }
     }
     filter {
       if "beats_input_codec_plain_applied" in [tags] {
         mutate {
           rename => ["log", "message"]
           add_tag => [ "DBBKUP", "kubernetes" ]
         }
         mutate {
             remove_tag => ["beats_input_codec_plain_applied"]
         }
         date {
           match => ["time", "ISO8601"]
           remove_field => ["time"]
         }
         grok {
             #match => { "source" => "/var/log/containers/%{DATA:pod_name}_%{DATA:namespace}_%{GREEDYDATA:container_name}-%{DATA:container_id}.log" }
             #remove_field => ["source"]
             match => { "message" => "%{TIMESTAMP_ISO8601:LogTimeStamp}%{SPACE}%{GREEDYDATA:Message}" }
             remove_field => ["message"]
             add_tag => ["DBBKUP"]
         }

         if "DBBKUP" in [tags] and "vz1-warrior-job" in [kubernetes][pod][name] {
           grok {
             match => { "message" => "%{GREEDYDATA:bkupLog}" }
             remove_field => ["message"]
             add_tag => ["WARJOBS"]
             remove_tag => ["DBBKUP"]
           }
         }
       }
     }

     output {
          elasticsearch {
             #hosts => "localhost:9200"
              hosts => "index.elastic:9200"
              manage_template => false
              index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
              #document_type => "%{[@metadata][type]}"
          }
     }
    filebeat.yml: |
      #filebeat.registry_file: /var/tmp/filebeat/filebeat_registry # store the registry on the host filesystem so it doesn't get lost when pods are stopped
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            tags:
              - "kube-logs"
            templates:
              - condition:
                  or:
                    - contains:
                        kubernetes.pod.name: "db-backup-ne-mgmt"
                    - contains:
                        kubernetes.pod.name: "db-backup-list-manager"
                    - contains:
                        kubernetes.pod.name: "db-backup-scheduler"
                config:
                  - type: docker
                    containers.ids:
                      - "${data.kubernetes.container.id}"
                    multiline.pattern: '^[[:space:]]'
                    multiline.negate: false
                    multiline.match: after
      processors:
        - drop_event:
            when.or:
               - equals:
                   kubernetes.namespace: "kube-system"
               - equals:
                   kubernetes.namespace: "default"
               - equals:
                   kubernetes.namespace: "logging"
      output.logstash:
       hosts: ["logstash-service.logging:5044"]
       index: filebeat

      setup.template.name: "filebeat"
      setup.template.pattern: "filebeat-*"
    kibana.yml: |
     elasticsearch.url: "http://index.elastic:9200"
- apiVersion: v1
  kind: Service
  metadata:
    name: logstash-service
  spec:
    clusterIP:
    externalTrafficPolicy: Cluster
    ports:
    - nodePort: 31481
      port: 5044
      protocol: TCP
      targetPort: 5044
    selector:
      app: logstash
    sessionAffinity: None
    type: NodePort
  status:
    loadBalancer: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    labels:
      app: logstash
    name: logstash-deployment
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: logstash
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        labels:
          app: logstash
      spec:
        containers:
        - env:
          - name: ES_VERSION
            value: 2.4.6
          image: docker.elastic.co/logstash/logstash:6.3.1
          imagePullPolicy: IfNotPresent
          name: logstash
          ports:
          - containerPort: 5044
            protocol: TCP
          resources:
            limits:
              cpu: "1"
              memory: 4Gi
            requests:
              cpu: "1"
              memory: 4Gi
          volumeMounts:
          - mountPath: /usr/share/logstash/config
            name: config-volume
          - mountPath: /usr/share/logstash/pipeline
            name: logstash-pipeline-volume
        volumes:
        - configMap:
            items:
            - key: logstash.yml
              path: logstash.yml
            name: logging-configmap
          name: config-volume
        - configMap:
            items:
            - key: logstash.conf
              path: logstash.conf
            name: logging-configmap
          name: logstash-pipeline-volume
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    labels:
      app: filebeat
    name: filebeat
  spec:
    selector:
      matchLabels:
        app: filebeat
    template:
      metadata:
        labels:
          app: filebeat
        name: filebeat
      spec:
        serviceAccountName: filebeat-serviceaccount
        containers:
        - args:
          - -e
          - -path.config
          - /usr/share/filebeat/config
          command:
          - /usr/share/filebeat/filebeat
          env:
          - name: LOGSTASH_HOSTS
            value: logstash-service:5044
          - name: LOG_LEVEL
            value: info
          - name: FILEBEAT_HOST
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          image: docker.elastic.co/beats/filebeat:6.4.1
          imagePullPolicy: IfNotPresent
          name: filebeat
          resources:
            limits:
              cpu: 500m
              memory: 4Gi
            requests:
              cpu: 500m
              memory: 4Gi
          volumeMounts:
          - mountPath: /usr/share/filebeat/config
            name: config-volume
          - mountPath: /var/log/hostlogs
            name: varlog
            readOnly: true
          - mountPath: /var/log/containers
            name: varlogcontainers
            readOnly: true
          - mountPath: /var/log/pods
            name: varlogpods
            readOnly: true
          - mountPath: /var/lib/docker/containers
            name: varlibdockercontainers
            readOnly: true
          - mountPath: /var/tmp/filebeat
            name: vartmp
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext:
          runAsUser: 0
          privileged: true
        tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
        volumes:
        - hostPath:
            path: /var/log
            type: ""
          name: varlog
        - hostPath:
            path: /var/tmp
            type: ""
          name: vartmp
        - hostPath:
            path: /var/log/containers
            type: ""
          name: varlogcontainers
        - hostPath:
            path: /var/log/pods
            type: ""
          name: varlogpods
        - hostPath:
            path: /var/lib/docker/containers
            type: ""
          name: varlibdockercontainers
        - configMap:
            items:
            - key: filebeat.yml
              path: filebeat.yml
            name: logging-configmap
          name: config-volume

- apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRoleBinding
  metadata:
    name: filebeat-clusterrolebinding
    namespace: logging
  subjects:
  - kind: ServiceAccount
    name: filebeat-serviceaccount
    namespace: logging
  roleRef:
    kind: ClusterRole
    name: filebeat-clusterrole
    apiGroup: rbac.authorization.k8s.io

- apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRole
  metadata:
    name: filebeat-clusterrole
    namespace: logging
  rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
    - namespaces
    - pods
    verbs:
    - get
    - watch
    - list

- apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: filebeat-serviceaccount
    namespace: logging

The Kibana dashboard is up and the Elasticsearch & Logstash APIs are working fine, but Filebeat is not sending data to Logstash: I do not see any data arriving on the Logstash listener on port 5044.

I found on the Elastic forums that the following iptables command might resolve the issue, but no luck:

iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10

Still nothing arrives at the Logstash listener. Please help me if I am missing anything, and let me know if you need any more information.

NOTE:
The filebeat.yml, logstash.yml & logstash.conf files work perfectly when deployed on plain Kubernetes.

Hi!

So Logstash is reachable from the Filebeat pod, but Filebeat is not sending data?

Could you start Filebeat at debug log level and check whether there is anything problematic in the logs?
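For reference, debug logging can be enabled in filebeat.yml with a snippet like the following (a sketch; narrow the selectors if the output is too noisy):

```yaml
# Raise verbosity so autodiscover/publishing activity is visible.
logging.level: debug
# "*" enables all debug selectors; something like ["autodiscover", "kubernetes"]
# focuses the output on the parts relevant here.
logging.selectors: ["*"]
```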

Thanks for sharing the logs! You are correct that there is no error showing. However, in the last log I see that only one event is reported ("writes":{"success":1,"total":1}), which makes me think that logs are not being collected.

We need more verbose logging here; did you try setting the log level, and it didn't work? Another option would be to set the output to console and see if logs are collected at all: https://www.elastic.co/guide/en/beats/filebeat/current/console-output.html
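Per the linked docs, the Logstash output can be temporarily swapped for the console output to verify that events are collected at all. Note that Filebeat allows only one output at a time, so the Logstash output must be commented out, e.g.:

```yaml
# Temporarily disabled while debugging:
#output.logstash:
#  hosts: ["logstash-service.logging:5044"]

# Print collected events to stdout instead.
output.console:
  pretty: true
```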

I don't think these logs are very helpful. We need to see whether autodiscover is able to identify pods and start collecting from them, which is why we need the complete output in debug mode. You can post it to a pastebin tool and share the link here.

In addition, I suspect there may be something wrong with the Filebeat configuration. In that case I would debug it by starting from a very basic configuration that works and adding things back step by step.
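A minimal starting point for such step-by-step debugging might look like this (a hypothetical sketch, not taken from the thread; the path is only an example):

```yaml
# Minimal filebeat.yml: one static log input, console output.
# Once this collects events, add autodiscover, processors, and the
# Logstash output back one piece at a time.
filebeat.inputs:
- type: log
  paths:
    - /var/log/containers/*.log   # example path; adjust for your cluster

output.console:
  pretty: true
```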

Hey!

Here is a manifest that is tested on Openshift too: https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml

Please have a look and verify you use the proper configuration options.

Also here is the documentation regarding running Filebeat on Openshift: https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html#_red_hat_openshift_configuration

Let me know if that helps!

The Permission Denied error was resolved by setting SELinux to permissive mode with the following:

sudo setenforce Permissive

After that, the logs synced successfully with the ELK stack. Thanks Chris Mark for your time and help.
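Note that setenforce only changes the mode until the next reboot. To make it persistent (assuming a standard RHEL/CentOS-style node), the mode can be set in /etc/selinux/config:

```yaml
# /etc/selinux/config (sketch). Permissive mode stops SELinux from blocking
# the hostPath mounts, but it weakens the node's confinement; a targeted
# SELinux policy for Filebeat would be preferable in production.
SELINUX=permissive
SELINUXTYPE=targeted
```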


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.