# Filebeat too quick on recovering data

Hello.
I am running ELK 6.6 on a CentOS 7 box. I have filebeat configured on a Windows machine to forward specific logs to Logstash.
The problem is that my Logstash filter has a throttling mechanism set up so the ELK box isn't overwhelmed when live data is shipped from a production system. However, if Filebeat was down (and this has happened), it is expected to ship the old files it missed once it restarts. Filebeat does do this, but it processes the log file (~3280 KB) so quickly that everything is shipped within a single throttling period, which causes the majority of the logs after the 1002nd to be dropped. I would like to be able to control the rate at which the data is sent, and/or perhaps have a custom throttling rate for older logs. Can either of these be done in a way that doesn't affect my semi-live log shipping and processing filters?

My filebeat.yml file:

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - c:\path\to\log\files

    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml

    setup.template.settings:
      index.number_of_shards: 3

    name: Node
    fields_under_root: true
    fields:
      env: dev
      role: Node
      node: Node

    output.logstash:
      # The Logstash hosts
      hosts: ["127.0.0.1:2561"]
      index: myindex


My logstash filter (the relevant part):

    filter {
      throttle {
        period => 30
        max_age => 60
        after_count => 1000
        key => "%{host}"
        add_tag => "throttled"   # marks events over the live rate limit
      }
      if "throttled" in [tags] {
        throttle {
          period => 60
          max_age => 120
          after_count => 2
          key => "%{host}"
          add_tag => "drop"      # marks events to be discarded below
        }
      }
      if "drop" in [tags] {
        drop { }
      }
    }
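
For the "custom throttling rate for older logs" idea, what I have in mind is roughly the following (untested sketch; it assumes the event's `@timestamp` has already been parsed from the log line with a `date` filter, and the 600-second cutoff, the `replay` tag, and the `after_count` value are placeholders of mine):

    filter {
      # Tag events whose parsed timestamp is more than 10 minutes old,
      # then throttle that backlog separately from live traffic.
      ruby {
        code => "event.tag('replay') if (Time.now.to_f - event.get('@timestamp').to_f) > 600"
      }
      if "replay" in [tags] {
        throttle {
          period => 30
          max_age => 60
          after_count => 100
          key => "replay-%{host}"
          add_tag => "throttled"
        }
      }
    }

The separate `key` keeps the replayed backlog from eating into the live traffic's quota, but I'm not sure whether this interacts cleanly with my existing throttle blocks.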


Any help would be greatly appreciated.
Thank you

Hi!

Unfortunately, it does not seem that Filebeat has had such a mechanism so far. There is an interesting related discussion at Throttling log output from Filebeat directly?

However, configuring some network limits may be helpful in your case: https://www.elastic.co/guide/en/beats/filebeat/master/bandwidth-throttling.html
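
Since your Filebeat runs on Windows, an OS-level limit via a QoS policy might be one way to apply those network limits. This is only an untested sketch (the policy name and the 4 Mbit/s figure are placeholders; `New-NetQosPolicy` requires Windows 8 / Server 2012 or later and admin rights):

    # Limit outbound traffic generated by filebeat.exe to roughly 500 KB/s.
    # ThrottleRateActionBitsPerSecond is expressed in bits per second.
    New-NetQosPolicy -Name "FilebeatThrottle" `
        -AppPathNameMatchCondition "filebeat.exe" `
        -ThrottleRateActionBitsPerSecond 4000000

The policy can be dropped again with `Remove-NetQosPolicy -Name "FilebeatThrottle"` once the backlog has been shipped.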

C.


Hi Chris!