I have a question about the Filebeat processors (extract_array, drop_event, drop_fields).
My Filebeat agent collects about 2,500 log lines a second. Do you think that using these processors can lead to huge CPU usage? And if so, what about delegating these operations to Logstash?
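For context, here is a minimal sketch of what such a processor chain might look like in `filebeat.yml`. The field names (`status`, `client.ip`) and the drop condition are hypothetical placeholders, not taken from the original post:

```yaml
# Hypothetical processor chain: extract fields from an array,
# drop uninteresting events, then remove the raw message field.
processors:
  - extract_array:
      field: parts          # placeholder: an array field produced earlier
      mappings:
        status: 0           # first array element -> status
        client.ip: 1        # second array element -> client.ip
  - drop_event:
      when:
        equals:
          status: "200"     # placeholder condition: discard 200s
  - drop_fields:
      fields: ["parts"]     # remove the raw array once extracted
```

Every event passes through each processor in order, so at 2,500 events/second the per-event cost of condition evaluation and field manipulation adds up on the Filebeat host.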
Hi Leandro,
I agree with you about doing the data transformation in Logstash. My Filebeat process sends logs to a Kafka server, and Logstash then consumes the Kafka messages. I'm facing huge CPU usage when Filebeat collects logs, so I'm going to try moving the Filebeat data transformation into Logstash and see if that decreases the CPU usage.
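A rough sketch of the equivalent logic as a Logstash filter block, assuming the same hypothetical field names as above (the dissect pattern and drop condition are illustrative, not from the original post):

```
filter {
  # Split the raw message into fields (placeholder pattern),
  # playing the role Filebeat's extract_array did.
  dissect {
    mapping => { "message" => "%{status} %{client_ip}" }
  }

  # Equivalent of Filebeat's drop_event processor.
  if [status] == "200" {
    drop { }
  }

  # Equivalent of Filebeat's drop_fields processor.
  mutate {
    remove_field => ["message"]
  }
}
```

This shifts the CPU cost from the edge hosts running Filebeat to the Logstash tier, which can be scaled out independently behind Kafka.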
Thanks for your help.