Our pipeline is pretty straightforward. We have Filebeat on all our servers shipping logs to a single server running the ELK stack. Logstash processes all the logs and outputs them to Elasticsearch, and at the very end of the pipeline we use the syslog output plugin to send certain tagged logs to a 3rd party syslog server.
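On the Logstash side the input is nothing special, just a standard beats listener (the port shown is the usual default, ours may differ):

input {
  beats {
    # Filebeat on each server points at this listener
    port => 5044
  }
}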
This had been working great until recently, but we're now heading into our peak traffic season and the volume of log data we ship to this 3rd party has increased significantly over the past week or so. We've noticed this is causing major delays in getting logs into Elasticsearch. I'm not sure whether all output plugins share a queue, so that when the syslog output backs up it causes the Elasticsearch output to back up as well?
I've been looking into other options to work around this, such as putting a Kafka queue in the middle or using an rsyslog server to do the queuing so that Logstash doesn't have to, but first I wanted to make sure I'm not missing something easy that could be fixed with our existing pipeline.
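For reference, if I went the Kafka route I imagine the output side would look roughly like this (broker address and topic name are just placeholders, not our actual setup), with a separate consumer or Logstash pipeline forwarding to the 3rd party so the main pipeline isn't blocked by it:

output {
  if [partner_name] == "Partner" {
    # Hypothetical sketch - broker and topic are placeholders
    kafka {
      bootstrap_servers => "kafka01:9092"
      topic_id => "partner-logs"
      codec => json
    }
  }
}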
I'm loading this Logstash config at the end of our pipeline:
output {
  if [partner_name] == "Partner" {
    syslog {
      host => "10.10.10.10"
      port => 514
      protocol => "tcp"
    }
  }
}
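As far as I understand, Logstash concatenates all the loaded config files into a single pipeline, so effectively the combined output section ends up looking something like this (the Elasticsearch host and index here are just placeholders for our real values):

output {
  # Main Elasticsearch output - host and index are placeholders
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }

  # Conditional syslog output to the 3rd party
  if [partner_name] == "Partner" {
    syslog {
      host => "10.10.10.10"
      port => 514
      protocol => "tcp"
    }
  }
}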