Filebeat 7.6.2_linux_x86_64 is not keeping up with the log entries added to .log files

My Filebeat instance does send logs, but nowhere near enough volume to keep up with the rate at which entries are being added to my log files.
The log files grow quickly, roughly 2.7 GB per 24 hours.
Kafka shows "spurts" of 4096 logs reaching my Logstash every 10 to 15 minutes, whereas I would expect an almost continuous stream.
When I first start Filebeat on a fresh log file with a fresh registry, I get a burst of forwarded logs, but shortly after it dies off to the aforementioned "spurts."

I have tried several combinations of worker and bulk_max_size values, all the way up to 128 / 65535, with no real effect.
I've tried

  queue.mem:
    events: 8192
    flush.min_events: 512
    flush.timeout: 5s

and don't see any difference.

Here is my output section of my filebeat.yml:

############################# Output ##########################################
output:
  logstash:
    # The Logstash hosts
    enabled: true
    hosts: ["log.redacted.com:5516"]
    worker: 128
    bulk_max_size: 65535
    protocol: https
    # ttl: 3600s
    ssl:
      supported_protocols: [TLSv1.2]

I've tried this on 7.6.2 (linux x86_64) as well as 7.11.1.

Can anyone suggest configuration changes to improve throughput?
Thank you!
Bryan

Have you considered using the disk queue? (I know you'd have to update Filebeat to a newer version.)

For reference: Configure the internal queue | Filebeat Reference [7.11] | Elastic
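
A minimal sketch of what that could look like, assuming Filebeat 7.11+ (where the disk queue is available); the max_size value is just a placeholder, size it to your own disk:

  # Spool events to disk so a slow or bursty output doesn't stall the harvesters.
  # Requires Filebeat 7.11 or later; 10GB is an example cap, not a recommendation.
  queue.disk:
    max_size: 10GB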

Hi @bjarvis, welcome to the community!

I would say something else is going on.

Filebeat with default settings should easily ship 2.7 GB of logs a day as a steady stream, assuming the entries arrive that way from tailing the logs and these are normal-ish logs; 2.7 GB per day is only about 31 KB/s, well within the defaults. What kind of logs are they?

What is your architecture?

Log files -> Filebeat -> Logstash -> Kafka -> Elasticsearch

Your Filebeat settings look heavily tweaked; I would go back to all defaults. 128 workers is probably not a good idea.

Can you try Filebeat with all defaults and ship directly to Elasticsearch, just as a test?
Logs -> Filebeat -> Elasticsearch
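
For that test, a minimal filebeat.yml along these lines should be enough; the log path and Elasticsearch host below are placeholders for your environment:

  filebeat.inputs:
    - type: log
      paths:
        - /var/log/myapp/*.log    # placeholder: point at your actual log files

  # Queue, worker, and bulk settings deliberately left at their defaults.
  output.elasticsearch:
    hosts: ["es.redacted.com:9200"]    # placeholder host

If that keeps up with the write rate, the bottleneck is somewhere between Filebeat and Logstash rather than in the tailing itself.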

Then we can debug the other stuff from there.

The initial burst is usually the backlog of logs, but it should settle into a stream.

What is the basic configuration of your Elasticsearch cluster (number of nodes, RAM and disk sizes)?
