Is there a way to intentionally rate limit a certain file (say, one of the matches in the log input pattern /var/log/all-my-services/*.log)?
The use case is that one service on a shared machine logs so much that it either 1) backs up filebeat/logstash or 2) causes bulk index request failures/retries. Either way, it has an adverse effect on log collection for the other services running on the same instance.
It is currently not possible to control the throughput of filebeat; there is an open issue for that: https://github.com/elastic/beats/issues/3847.
In the meantime, one thing you can try is to limit the number of OS threads filebeat uses with the `max_procs` setting.
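A minimal sketch of what that looks like in `filebeat.yml` (the value `1` is just an illustration; `max_procs` caps the number of CPUs filebeat will use, it does not throttle a single input):

```yaml
# filebeat.yml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/all-my-services/*.log

# Limit filebeat to a single CPU to reduce its overall throughput.
# Note: this slows down ALL inputs, not just the noisy one.
max_procs: 1
```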
You can also try to limit the resources of the process using features of the operating system like cgroups.
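For example, on a systemd-based host you could cap filebeat's CPU and memory with a unit drop-in (directive names are standard systemd, but the specific values here are just placeholders):

```shell
# Create a drop-in for the filebeat service, then reload and restart.
sudo mkdir -p /etc/systemd/system/filebeat.service.d
sudo tee /etc/systemd/system/filebeat.service.d/limits.conf <<'EOF'
[Service]
# Cap filebeat at 20% of one CPU and 256 MB of RAM via cgroups.
CPUQuota=20%
MemoryMax=256M
EOF
sudo systemctl daemon-reload
sudo systemctl restart filebeat
```

Like `max_procs`, this throttles the whole filebeat process rather than one input.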
Thanks for the info. Bummer that ticket is still open. Any idea on when it might get picked up?
Unfortunately cgroups/max_procs would limit filebeat entirely, when I just want to limit one specific noisy neighbor on the host machine. I appreciate the response though.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.