I had a working setup using Logstash with a UDP input and a RabbitMQ output to consume a high rate of remote syslog messages and publish them into Elasticsearch (with another Logstash instance using RabbitMQ as input and outputting to Elasticsearch). I found that Java was using 3 cores at 100% to handle the load (though it was handling it).
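For reference, the local Logstash instance was configured roughly like this (hostname, exchange, and routing key are placeholders, not my real values):

```
# Local Logstash: receive syslog over UDP, forward to RabbitMQ
input {
  udp {
    port => 514
    type => "syslog"
  }
}
output {
  rabbitmq {
    host          => "rabbitmq.example.com"  # placeholder AWS broker
    exchange      => "syslog"
    exchange_type => "direct"
    key           => "syslog"
  }
}
```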
I am now trying to convert to filebeat as the transport, instead of RabbitMQ, in the hope that filebeat can do the same job with a much lower performance hit.
So before I had:
syslog src -> udp 514 -> logstash (local) -> rabbitmq (AWS) -> logstash (AWS) -> ES (AWS)
And I am now trying to move to:
syslog src -> udp 514 -> rsyslog -> /var/log/file.log -> filebeat -> logstash (AWS) -> ES (AWS)
but I am finding that most messages are being dropped (likely due to the unnecessary file I/O). I also tried a pipe via mkfifo, but IIRC filebeat failed to load it.
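The filebeat side of that attempt is essentially the following (assuming a recent filebeat; older versions use `filebeat.prospectors` instead of `filebeat.inputs`, and the Logstash host is a placeholder):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/file.log

output.logstash:
  hosts: ["logstash.example.com:5044"]  # placeholder for the AWS Logstash endpoint
```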
So the question is: is there any way for filebeat (or packetbeat, if it can pull out the message, for that matter) to listen directly on port 514? That would let me publish the syslog data without the file I/O overhead.
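Ideally I'm hoping for something along these lines (purely hypothetical — I don't know whether filebeat has any such input type; this is just the shape of what I'm after):

```yaml
filebeat.inputs:
  - type: udp            # hypothetical: a direct UDP/syslog listener in filebeat
    host: "0.0.0.0:514"
```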