Good day everyone. One of our servers runs two containers with Filebeat 6.4. These two containers read logs and generate over 10,000 events per second, which leads to several problems:
- the server hosts other containers that already generate some load average (LA) of their own; the Filebeat containers raise the LA further and the server becomes slow
- if the Filebeat containers are stopped for, say, one hour and then started again, many logs have accumulated in the meantime; Filebeat tries to catch up to the end of the log files, generates even more events/sec, and the Elasticsearch cluster gets overloaded
- if the Elasticsearch cluster is overloaded (I know that is bad))), Filebeat falls behind (it tries to send 10,000 events/sec, but the cluster can only receive about 6,000 events/sec); once the cluster is no longer overloaded, Filebeat again tries to catch up to the end of the log files, generates more events/sec, and the cluster is overloaded again.
Our stack: Filebeat reads logs => Logstash => Elasticsearch (6.4).
I know that we can limit IO for Docker containers (but in our case we use an old SaltStack version that has no states for limiting container IO).
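For reference, this is the kind of per-container limit I mean; the flags are from the Docker CLI, while the device path, the values, and the image tag are just placeholders for our setup:

```shell
# cap CPU and block-device read bandwidth for the Filebeat container
# (device path /dev/sda and all values are placeholders)
docker run -d \
  --cpus="1.0" \
  --device-read-bps /dev/sda:10mb \
  --name filebeat \
  docker.elastic.co/beats/filebeat:6.4.0
```

We cannot set these through our current SaltStack states, which is why I am looking for a way to do it inside Filebeat itself.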
Maybe Filebeat has internal settings for limiting events per second, or something similar?
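In case it helps: the closest knobs I could find in the Filebeat 6.x docs seem to throttle throughput only indirectly (fewer cores, a smaller internal queue, fewer parallel harvesters, smaller output batches). A sketch of what I mean; the values here are guesses, not tested settings:

```yaml
# filebeat.yml — settings that indirectly cap throughput (values are guesses)
max_procs: 1                 # use only one CPU core

queue.mem:
  events: 2048               # smaller in-memory queue
  flush.min_events: 512
  flush.timeout: 5s

filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log   # placeholder path
    harvester_limit: 2       # read at most 2 files in parallel
    backoff: 1s              # wait longer between reads at end of file
    max_backoff: 30s

output.logstash:
  hosts: ["logstash:5044"]   # placeholder host
  bulk_max_size: 1024        # smaller batches per request
```

Is this the right direction, or is there a real events-per-second limit somewhere that I have missed?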