My Logstash configuration has almost 1,500 lines, where I take input from Beats, apply filters to extract fields from various log types, and output to Kafka. Is there any way to handle huge configuration files, or any best practices for Logstash parsers?
Can you please suggest any best practices for maintaining Logstash parser configs where we have custom log patterns (other than the standard log patterns provided by vendors)?
As the number of config lines increased, we started seeing latency in logs getting parsed. How can we handle this scenario efficiently? Would splitting into multiple configs improve performance? If so, how should we do it, for example by breaking the setup into separate pipelines like the sketch below?
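Just to illustrate what I have in mind (the pipeline ids, paths, and worker counts below are placeholders, not our actual setup), would a pipelines.yml along these lines be the right direction?

```
# pipelines.yml - one pipeline per log family instead of one monolithic config
# pipeline ids, paths, and worker counts are placeholders
- pipeline.id: apache-logs
  path.config: "/etc/logstash/conf.d/apache/*.conf"
  pipeline.workers: 4
- pipeline.id: firewall-logs
  path.config: "/etc/logstash/conf.d/firewall/*.conf"
  pipeline.workers: 4
```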
Have you looked into using the Logstash monitoring API to see which filters are adding the most time to the processing?
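For example, a quick way to check (assuming the default API port 9600) is to pull the per-pipeline plugin stats; each filter plugin reports how many events it has processed and its cumulative duration_in_millis:

```
curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'
```

Filters with a high duration_in_millis relative to their event count are the ones worth reworking first.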
But as I said, reduce the number of DATA and GREEDYDATA patterns. Don't spend time on other optimizations until your grok expressions have been cleaned up.
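As a rough sketch of that kind of cleanup (the field names and patterns here are only illustrative, not taken from your config):

```
filter {
  grok {
    # Slow: unanchored expression built around GREEDYDATA backtracks heavily
    # match => { "message" => "%{DATA:client} %{GREEDYDATA:rest}" }

    # Faster: anchor the expression and use specific patterns for each field
    match => { "message" => "^%{IPORHOST:client} %{WORD:method} %{URIPATHPARAM:request}$" }
  }
}
```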