How well does the syslog output module handle queuing if the destination host goes down temporarily? Or what is the best way to send logs to a 3rd party?


(Daniel J Finn) #1

We've recently been asked to ship a portion of our logs to a client at their datacenter. We are currently using the syslog output module to send logs from logstash to another internal device at the end of our pipeline.

I'm thinking that the easiest way to get the logs to this 3rd party may be to use the syslog output module to send them the requested logs but I have a couple of questions about this.

  1. Is it possible to encrypt the transfer of these logs using the syslog output module?

  2. If we lose network connectivity to the destination syslog host, can we trust the syslog output module to queue these logs, and if so, for how long?

  3. Is there a better way to do this?

Thanks,
Dan


(João Duarte) #2

The logstash-output-syslog plugin supports SSL, and it retries if it fails to send a message.

As for other options, it depends on what the client is able to receive. If it's an HTTP endpoint, there's the http output (which also supports SSL); if it's plain data, you can use a simple TCP socket through the tcp output (also with SSL support).
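For reference, a minimal sketch of what a TLS-wrapped syslog output could look like — the hostname, port, and certificate path are placeholders, and the exact option names should be checked against the plugin version you're running:

```
output {
  syslog {
    host       => "logs.example-client.com"            # placeholder destination
    port       => 6514                                 # common TLS syslog port
    protocol   => "ssl-tcp"                            # TLS-wrapped TCP syslog
    ssl_cacert => "/etc/logstash/certs/client-ca.crt"  # placeholder CA path
    rfc        => "rfc5424"
  }
}
```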


(Daniel J Finn) #3

Thanks @jsvd. Do you know how well it handles queuing? Rather than just testing it to see what happens, I'm wondering how it will handle the 3rd party endpoint going down for a few hours to a day.


(João Duarte) #4

@dfinn It will retry indefinitely, which means it will apply backpressure to the Logstash inputs once the internal queue fills up, if you're using the in-memory queue.

If you enable persistent queues in Logstash, it will be able to continue receiving events until the capacity limits configured for the PQ are hit; then it will apply backpressure to the inputs.
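As a rough sketch, enabling the PQ is a few settings in `logstash.yml` — the path and size below are illustrative values, not recommendations; size it to cover your expected outage window at your ingest rate:

```
# logstash.yml — enable the persistent queue (illustrative values)
queue.type: persisted
path.queue: /var/lib/logstash/queue   # placeholder path on local disk
queue.max_bytes: 8gb                  # backpressure kicks in once this fills
```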


(Jordan Sissel) #5

What @jsvd says is right, it will retry indefinitely.

However, TCP and UDP syslog streams are both vulnerable to data loss: when a connectivity problem occurs, even with TCP, some of the last-sent data can be lost, and Logstash can do nothing to prevent this (it's a limitation of TCP itself).

That said, when the plugin detects a failure, it will retry the last transmission until it succeeds.


(Daniel J Finn) #6

Excellent, thanks for the feedback. Sounds like this may work for us. We'll have to do some testing on what happens if the endpoint goes down for an extended period. We don't expect that to happen, but this is the first time we've had a requirement to ship logs to a 3rd party, so it's new to us.


(Daniel J Finn) #7

@jsvd or @jordansissel, just getting back to this now, and I had one other question about queueing. Without enabling PQs, what is the size of the in-memory queue? I'm trying to determine whether that will be good enough for us or if we need to look into enabling PQ.


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.