I have a Filebeat instance processing a 10-20GB nginx log file, and I can't seem to get past ~500 output events/s to Logstash (measured over a 6-hour period). I've read in existing topics about tuning parameters such as filebeat.spool_size, worker, bulk_max_size and pipelining. However, the throughput is still well below my expectation of at least 1k events/s.
My Filebeat version is 5.6.3 and Logstash is 5.6.2. Filebeat and Logstash run on different nodes, but on the same local LAN. My Filebeat conf is as below:
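For context, the settings I've been experimenting with look roughly like this in filebeat.yml (the values here are examples of what I've tried, not fixed numbers):

```yaml
# Filebeat 5.x: events buffered in the spooler before being forwarded
filebeat.spool_size: 4096

output.logstash:
  hosts: ["logstash-host:5044"]   # example hostname
  # Number of parallel workers publishing to Logstash
  worker: 2
  # Maximum number of events per bulk request
  bulk_max_size: 2048
  # Async requests in flight before waiting for ACKs
  pipelining: 2
```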
Can someone please point me in the right direction? I would appreciate any help here.