Filebeat - how to limit events per second?

Good day everyone. One of our servers runs two containers with Filebeat 6.4. These two containers read logs and generate over 10,000 events per second, which leads to several problems:

  1. The server also hosts other containers that produce some load average (LA). The Filebeat containers raise the LA further and the server becomes slow.
  2. If the Filebeat containers are stopped, for example for an hour, and then started again, a lot of logs have accumulated in the meantime. Filebeat tries to catch up to the end of the log files, generates even more events per second, and the Elasticsearch cluster gets overloaded.
  3. If the Elasticsearch cluster is overloaded (I know that is bad), Filebeat falls behind (it tries to send 10,000 events/sec, but the cluster can only accept 6,000 events/sec). As soon as the cluster is no longer overloaded, Filebeat again tries to catch up to the end of the log files, generates more events per second, and the cluster is overloaded again.

Our stack: Filebeat reads the logs => Logstash => Elasticsearch (6.4).

I know that we can limit IO for Docker containers (but in our case we use an old SaltStack that has no states for limiting IO in containers).

Maybe Filebeat has internal settings for limiting events per second, or something similar?

I found this - the Filebeat FAQ - but that only limits network bandwidth.

Filebeat currently does not support rate limiting. I think it would be better to apply rate-limiting rules (as shown in the FAQ) on the server itself, because one server running Filebeat does not know how another server running Filebeat is doing. Since you use Logstash, you might also consider the throttle filter. This creates back-pressure, which also slows down Filebeat.
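Since the events already pass through Logstash, a minimal sketch of such a throttle setup could look like the one below. The key, the threshold, and the sleep follow-up are purely illustrative and would need to be tuned to your own event volume:

```
filter {
  throttle {
    # bucket events by the Filebeat host name (6.x field); any field can serve as the key
    key => "%{[beat][hostname]}"
    # tag everything above roughly 6000 events per second and per key
    after_count => 6000
    period => "1"
    max_age => 2
    add_tag => "throttled"
  }
  # one possible follow-up: pause briefly on the surplus, which slows the pipeline
  # down and lets back-pressure propagate to Filebeat instead of dropping events
  if "throttled" in [tags] {
    sleep {
      time => "1"      # seconds to sleep
      every => 1000    # only sleep once per 1000 throttled events
    }
  }
}
```

Instead of the sleep you could also drop the tagged events or route them to a separate output, but then you lose or delay those logs; the tag just gives you a handle on the surplus.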

By default the Go runtime creates one OS thread per available logical core. To reduce the maximum load Filebeat can generate you can use cgroups, nice, and set max_procs: <n>. With max_procs set to 1 only one OS thread is active at a time, which caps the load Filebeat can put on the machine. A value of 1 or 2 is often enough.
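In filebeat.yml this is a single top-level setting. A minimal sketch (paths and hosts are placeholders for your own setup):

```yaml
# filebeat.yml (Filebeat 6.x) - sketch; paths and hosts are placeholders
max_procs: 1                  # one active OS thread; 2 is also a common choice

filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log

output.logstash:
  hosts: ["logstash:5044"]
```

Combining this with a CPU limit on the Filebeat containers (which is what the cgroups suggestion amounts to) keeps them from starving the other containers on the host.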

If you see errors and retries when Elasticsearch becomes overloaded, it is a good idea to reduce the batch size, so that fewer events are sent at once. When Elasticsearch is overloaded by large batches, random events within a batch will fail and have to be retried.
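With Filebeat shipping to Logstash, the batch size is controlled by bulk_max_size on the Logstash output (the default is 2048); the same option exists on the Elasticsearch output if Filebeat ever ships directly. The value below is just a starting point to experiment with:

```yaml
# filebeat.yml - send smaller batches so fewer events fail at once when the
# downstream cluster is struggling (the value is illustrative)
output.logstash:
  hosts: ["logstash:5044"]
  bulk_max_size: 512
```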

thanks!
