Hi,
I am sending our production logs to a Logstash cluster that has 6 servers. In my Filebeat settings I have the number of workers set to 2, so each production server opens 12 pipelines to the Logstash cluster.
But sometimes, due to production traffic spikes, our application logs get flooded and grow very large within a few minutes. I don't want to crash my Logstash cluster, as it is also used to parse other applications' logs.
I am continuously tailing my log file.
Is there a way to limit how much data is sent at a time, so that Filebeat keeps sending steadily in this kind of situation?
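For context, my output section looks roughly like this (hostnames are placeholders, not my real servers):

```yaml
# Hypothetical filebeat.yml output section, just to illustrate the setup
output.logstash:
  hosts: ["ls1:5044", "ls2:5044", "ls3:5044", "ls4:5044", "ls5:5044", "ls6:5044"]
  loadbalance: true
  worker: 2   # 2 workers per host x 6 hosts = 12 connections per prod server
```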
Since Logstash is the server and you want to protect it from being overloaded by logs, you should consider applying QoS rules on the Logstash server. With rate limiting in place on the network or on the server itself, back-pressure builds up in Filebeat and slows it down. You might even consider a time-scheduled policy that reduces the bandwidth available to Filebeat even further at peak times. A minimal sketch is shown below.
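One way to do this is with Linux `tc` ingress policing on the Logstash host. This is only a sketch, assuming Logstash listens for Beats on TCP 5044 on `eth0` and you have root access; the rate and burst values are illustrative and would need tuning for your traffic:

```sh
# Attach an ingress qdisc to the interface receiving Beats traffic
tc qdisc add dev eth0 handle ffff: ingress

# Police traffic destined for the Beats port (assumed 5044) to ~50 Mbit/s;
# excess packets are dropped, TCP backs off, and Filebeat slows down
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match ip dport 5044 0xffff \
    police rate 50mbit burst 1m drop flowid :1
```

A time-scheduled policy could then be as simple as a cron job that replaces this filter with a lower rate during peak hours and restores it afterwards.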
Furthermore, if you add more Filebeat instances to your environment, the overall bandwidth used will not change (you don't have to re-balance all Filebeat instances). Removing a Filebeat instance, or having a machine down for maintenance, frees up bandwidth for the other Beats to use.