Filebeat performance is low after upgrading to 7.9.3

Hi,

I am using Filebeat to read all the application logs in my OpenShift cluster. Filebeat sends the logs to Logstash, which forwards them to Elasticsearch for indexing, and they are finally visible in Kibana.

I am running 3 Filebeat pods (one per worker node), 1 Logstash pod, 1 Elasticsearch pod, and 1 Kibana pod.
Resources allocated to each Filebeat pod: 6 GB RAM and 6 CPU cores.

After I upgraded the stack (ELK and Filebeat) from version 6.5.4 to 7.9.3, Filebeat became very slow: reading around 3 lakh (300,000) records took more than 60 minutes, which is much slower than with 6.5.4.

Contents of filebeat.yml:

  filebeat.autodiscover:
    providers:
      - type: kubernetes
        host: ${NODE_NAME}
        tags:
          - "kube-logs"
        templates:
          - condition:
              or:
                - contains:
                    kubernetes.pod.name: "ne-mgmt"
                - contains:
                    kubernetes.pod.name: "list-manager"
                - contains:
                    kubernetes.pod.name: "scheduler-mgmt"
                - contains:
                    kubernetes.pod.name: "sync-ne"
                - contains:
                    kubernetes.pod.name: "file-manager"
                - contains:
                    kubernetes.pod.name: "dash-board"
                - contains:
                    kubernetes.pod.name: "ne-db-manager"
                - contains:
                    kubernetes.pod.name: "config-manager"
                - contains:
                    kubernetes.pod.name: "report-manager"
                - contains:
                    kubernetes.pod.name: "clean-backup"
                - contains:
                    kubernetes.pod.name: "warrior"
                - contains:
                    kubernetes.pod.name: "ne-backup"
                - contains:
                    kubernetes.pod.name: "ne-restore"
            config:
              - type: docker
                containers.ids:
                  - "${data.kubernetes.container.id}"
                multiline.pattern: '^[[:space:]]'
                multiline.negate: false
                multiline.match: after
  logging.level: debug
  processors:
    - drop_event:
        when.or:
          - equals:
              kubernetes.namespace: "kube-system"
          - equals:
              kubernetes.namespace: "default"
          - equals:
              kubernetes.namespace: "logging"
  output.logstash:
    hosts: ["logstash-service.logging:5044"]
    index: filebeat
    pretty: true
  setup.template.name: "filebeat"
  setup.template.pattern: "filebeat-*"

Please let me know if there are any issues with the configuration, and suggest any parameters I can tune to get better performance out of Filebeat.

Hi!

Since you are on 7.9.3 now, you can change the docker input to the container input. Also see beats/filebeat-kubernetes.yaml at ac60dcb7e4fb9634d3cd90ce7e592020201edc21 · elastic/beats · GitHub.
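
For example, the config section of your autodiscover template could look roughly like this with the container input (just a sketch; the path assumes the default /var/log/containers layout on your nodes, so adjust it if yours differs):

  config:
    - type: container
      # the container input replaces the deprecated docker input in 7.x
      paths:
        - "/var/log/containers/*${data.kubernetes.container.id}.log"
      multiline.pattern: '^[[:space:]]'
      multiline.negate: false
      multiline.match: after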

In addition, do you see any suspicious log messages when running filebeat in debug mode?

C.

I am facing exactly the same connection issue mentioned in this link -

Also, Filebeat's performance degraded even further: it stops sending some of the events to Logstash after around 20 hours, even though only around 150 log records are generated every 5-10 minutes.

Well, 150 events per 5-10 minutes is not a high rate at all, so I don't think it could be a back-pressure issue on Logstash. Could you confirm it's not a networking issue, and maybe try the steps from Publishing to Logstash fails with "connection reset by peer" message | Filebeat Reference [7.11] | Elastic?

C.

I forgot to mention that there is no firewall running in my cluster, so I think that fix might not apply in my case. But I will give it a try by setting the ttl value to less than 30s and setting pipelining to 0.
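
To be concrete, this is roughly what I plan to try in the output section of filebeat.yml (the values are just examples based on that troubleshooting page, so treat it as a sketch):

  output.logstash:
    hosts: ["logstash-service.logging:5044"]
    # re-establish the connection periodically instead of keeping it open forever
    # (25s is just an example value below 30s)
    ttl: 25s
    # ttl only takes effect when pipelining is disabled
    pipelining: 0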

Please share your suggestions.
