Upper limits on Filebeat harvesting

I have a system which produces separate log files of all RMI calls and SQL executions, which is very useful for tracing client requests. The loggers are set to roll the file every 10 MB, which works out to roughly 36,000 lines on average. During busy periods these files can roll up to 10 times per minute, meaning up to 360,000 documents a minute (6,000 per second) that need to be harvested and sent to Logstash and then Elasticsearch, just from that file path.
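For concreteness, the input side of my config looks roughly like this (the path is a placeholder and the non-default values are illustrative, not my exact settings):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/rmi-sql-*.log   # placeholder path for the rotating logs
    # Rotated-in files are only picked up on the next directory scan;
    # the default of 10s may be slow when files roll up to 10x a minute.
    scan_frequency: 1s
    # Bigger read buffer per harvester (default is 16384 bytes).
    harvester_buffer_size: 65536
    # Defaults, shown explicitly: drop harvesters and registry state for
    # files that have been rotated away, so state doesn't grow unbounded.
    close_removed: true
    clean_removed: true
```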

I'm struggling to configure Filebeat to keep up with this, and I can't actually find any information on what a reasonable upper limit for Filebeat throughput is. I'm currently only getting 200-300 events per second from Filebeat to Logstash across all harvested files, and the limiting factor seems to be Filebeat keeping up with the rapidly rotating logs.
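In case it helps, these are the queue and output knobs I understand might be relevant, sketched with illustrative values (the host is a placeholder, and option names should be checked against the docs for the Filebeat version in use). I'd like to know whether tuning these can realistically get anywhere near 6,000 events per second, or whether the harvester side itself is the bottleneck:

```yaml
# Larger in-memory queue so harvesters aren't blocked waiting on the output.
queue.mem:
  events: 65536            # default 4096
  flush.min_events: 4096   # default 2048
  flush.timeout: 1s

output.logstash:
  hosts: ["logstash-host:5044"]   # placeholder host
  bulk_max_size: 4096      # events per batch; default 2048
  worker: 4                # connections per host; default 1
  pipelining: 2            # in-flight batches per connection; default 2
```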

As I'm not clear on what the upper limit of Filebeat harvesting is, I'm not sure whether I should put effort into tuning Filebeat or whether I need to look at a different approach.
