I have a Linux server running Filebeat 8.3.3 taking Netflow as input and writing it out to ES on another server. As the Netflow load is much more than what Filebeat can handle, I'm thinking of splitting the Netflow input into 2, i.e. input via 2 physical ports on my Netflow server, then run 2 instances of Filebeat, one for each port, and having both write to the same ES.
Is this feasible? If so, how should I go about configuring it? If not, is there a better way?
I think you can run multiple instances of Filebeat, each listening on a different address/port and sending data to the same Elasticsearch cluster. As for the NetFlow traffic itself, you could put a load balancer in front to distribute the flows across these Filebeat instances.

You can also try tuning Filebeat itself (its queue and output settings).
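To sketch the multi-instance idea: if each physical port has its own IP address, each Filebeat instance can bind its netflow input to that address. A minimal sketch for one instance (the IP `192.168.1.10`, port `2055`, and ES host are placeholders; substitute your interface addresses):

```yaml
# filebeat-a.yml — first instance, bound to the IP of the first physical port.
filebeat.inputs:
  - type: netflow
    host: "192.168.1.10:2055"   # placeholder: IP of physical port 1
    protocols: [ v5, v9, ipfix ]

output.elasticsearch:
  hosts: ["https://es-host:9200"]   # placeholder ES endpoint
```

The second instance would use an identical config with the other interface's IP in `host`. Since both instances run on the same machine, start each with its own data directory (e.g. `filebeat -c filebeat-a.yml --path.data /var/lib/filebeat-a`) so their registries don't collide.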
When you say "different ports", do you mean different UDP ports in the NetFlow packets? The packets are all being sent to the same UDP port, which is the port Filebeat listens on. Is there a way to "tie" different Filebeat instances to different physical ports?
Yes, I've also tried optimizing Filebeat, and the best I've managed so far is about 13K records indexed into ES per second, which is still not enough to handle the load.
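For reference, the tuning I've been experimenting with centers on the memory queue and the Elasticsearch output. A sketch of the relevant knobs (the values shown are illustrative starting points, not recommendations; the ES host is a placeholder):

```yaml
# Illustrative Filebeat throughput settings.
queue.mem:
  events: 65536             # larger in-memory queue to absorb NetFlow bursts
  flush.min_events: 2048    # batch size handed to the output
  flush.timeout: 1s         # flush partial batches after this interval

output.elasticsearch:
  hosts: ["https://es-host:9200"]   # placeholder ES endpoint
  worker: 4                 # parallel bulk requests per ES host
  bulk_max_size: 2048       # events per bulk request
```

Larger batches and more output workers trade a bit of latency and memory for throughput; past a point the bottleneck shifts to ES-side indexing.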