My Filebeat instance does send logs, but nowhere near enough volume to keep up with the rate at which entries are being added to my log files.
The log files grow very quickly, roughly 2.7 GB per 24 hours.
Kafka shows I get "spurts" of 4096 logs sent to my Logstash every 10 to 15 minutes, whereas I would expect an almost continuous stream of logs.
When I first start Filebeat on a fresh log file with a fresh registry, I get a burst of forwarded logs, but shortly afterward it dies off to the aforementioned "spurts."
I have tried several combinations of worker and bulk_max_size values, all the way up to 128 / 65535, with no real effect.
I've also tried:

queue.mem:
  events: 8192
  flush.min_events: 512
  flush.timeout: 5s

and don't see any difference.
Here is the output section of my filebeat.yml:
############################# Output ##########################################
output:
  logstash:
    # The Logstash hosts
    enabled: true
    hosts: ["log.redacted.com:5516"]
    worker: 128
    bulk_max_size: 65535
    protocol: https
    # ttl: 3600s
    ssl:
      supported_protocols: [TLSv1.2]
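In case it helps, here is a combined sketch of what I've been experimenting with, queue and output together. The values are examples under test, not known-good settings; my understanding is that the memory queue's events setting caps the largest batch the output can pull, so I've sized it above bulk_max_size here:

```yaml
# Sketch of combined queue + output settings (experimental values).
queue.mem:
  events: 65536          # assumption: sized above bulk_max_size so batches aren't capped by the queue
  flush.min_events: 512
  flush.timeout: 5s

output:
  logstash:
    enabled: true
    hosts: ["log.redacted.com:5516"]
    worker: 8            # trying a smaller worker count than 128
    bulk_max_size: 4096
    ssl:
      supported_protocols: [TLSv1.2]
```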
I've tried this on 7.6.2 (Linux x86_64) as well as 7.11.1.
Can anyone suggest a configuration to try that might improve throughput?
Thank you!
Bryan