Filebeat and busy files

That's a good idea. In fact, I have Logstash split into three layers: cache, index, and writer, with Redis queues between them. I ran two more tests:
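
For context, here is a minimal sketch of how the Redis wiring between the layers looks; the host, port, and key names are made up for illustration, and the real pipelines carry my filters as well:

```
# cache layer: receives from Filebeat, pushes raw events into Redis
input {
  beats {
    port => 5044
  }
}
output {
  redis {
    host => "127.0.0.1"       # hypothetical; my Redis runs on separate hosts
    data_type => "list"
    key => "logstash-cache"   # hypothetical queue name
  }
}

# index layer: pops from the cache queue, filters, pushes to the writer queue
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash-cache"
  }
}
# ... filters ...
output {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash-index"
  }
}
```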

  1. I turned off Filebeat on all nodes in the environment except one, where I set up a prospector for this single log (see the Filebeat sketch after this list). Events were not sent any faster than before. After I turned Filebeat back on on all hosts, it flooded the Logstash cache instances with old logs (reported here: Filebeat sends old logs after reboot), and the Redis queues kept growing, so the Logstash cache instances can clearly handle a much bigger load than normal (a single instance does about 10k events/s).

  2. I set the output in Filebeat to file only, as in the sketch below. With that, Filebeat works perfectly at 5700 events/s. This machine runs many Java apps, the load average is ~30, and CPU usage is about 50%. Beaver also runs on it, sending logs to another Logstash stack (I'm testing Filebeat with the new Logstash configuration), and it has no such problems. On the other hand, logs from syslog and other "normal" services are sent by Filebeat without issues, but those files don't have nearly as many events. It's not a problem with the Java apps themselves, because I have a VM that is only a reverse proxy running nginx, the load there is not high, and the nginx access log shows the same behavior.
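
For reference, roughly what the two test configurations looked like (Filebeat 1.x style; the log path and output directory are hypothetical, and in test 2 the prospector list was my normal full set):

```yaml
filebeat:
  prospectors:
    # test 1: a single prospector for just the busy log
    - input_type: log
      paths:
        - /var/log/app/busy.log   # hypothetical path

output:
  # test 2: write events to a local file instead of the Logstash cache layer
  file:
    path: "/tmp/filebeat"         # hypothetical directory
    filename: filebeat
```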