In my environment I have around 6-7 applications. Together they log around 30-40 lines per second, which adds up to a few GB per day. Filebeat can't keep up with parsing and shipping the logs to Elasticsearch (via Logstash). I tried to speed Filebeat up by adding additional flags, without success.
My filebeat version is:
filebeat version 7.3.1 (amd64), libbeat 7.3.1 [a4be71b90ce3e3b8213b616adfcd9e455513da45 built 2019-08-19 19:30:50 +0000 UTC]
At that throughput level it sounds unlikely that Filebeat is the bottleneck. Filebeat can only send as fast as Logstash and downstream systems can accept. How have you determined that Filebeat is the bottleneck?
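That said, if Filebeat itself did turn out to be the limit, the relevant knobs live in filebeat.yml rather than command-line flags. A sketch of the common throughput-related settings in 7.x (the host name and all values are illustrative, not recommendations):

```yaml
# filebeat.yml -- throughput-related settings (illustrative values only;
# tune against your hardware and what Logstash can actually absorb)
queue.mem:
  events: 8192             # in-flight event buffer (default 4096)
  flush.min_events: 2048   # batch size the queue hands to the output
  flush.timeout: 1s        # flush partial batches after this long

output.logstash:
  hosts: ["logstash:5044"] # hypothetical host
  worker: 2                # parallel connections per Logstash host
  bulk_max_size: 4096      # max events per batch (default 2048)
  pipelining: 2            # async batches in flight per connection
```

Raising these only helps if Logstash and Elasticsearch can absorb the larger batches; otherwise back-pressure just moves the queueing around.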
In the 'message' field I saw that the timestamp from my log was much older than the current time, and the gap between the two kept growing over time.
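A growing gap like that can be quantified by comparing the event's own timestamp against the wall clock at the moment it is read. A minimal sketch in Python (the log line and its timestamp format are assumptions; adjust the format string to match your application's logs):

```python
from datetime import datetime, timezone

# Hypothetical log line -- replace with a real line from your application.
log_line = "2019-09-01 10:00:00 INFO request handled"

# Parse the leading timestamp (format is an assumption) and treat it as UTC.
log_ts = datetime.strptime(log_line[:19], "%Y-%m-%d %H:%M:%S")
log_ts = log_ts.replace(tzinfo=timezone.utc)

# Lag between when the event happened and "now" (ingest time).
lag_seconds = (datetime.now(timezone.utc) - log_ts).total_seconds()
print(f"ingest lag: {lag_seconds:.0f}s")
```

If this number keeps climbing while the pipeline runs, events are being produced faster than they are being consumed somewhere downstream, which is consistent with back-pressure from Logstash or Elasticsearch rather than a slow Filebeat.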