Logstash relay

I'm interested in setting up a Logstash relay. I need to be able to listen for syslog and forward the messages via at least one other Logstash server before they reach a group of Logstash servers. Do you think this is possible? If so, I'd be interested in some example config. Would you use one conf file or split them?

Does this match what you are trying to do?


I think so. So if I understand this correctly, I could use pipelines to send various inputs to one pipeline that would forward the data to another Logstash instance. On that second node, I could again have one pipeline listening to local sources, one ingesting Beats, and one sending downstream.
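Something like this is what I have in mind (untested sketch; the paths, ports, and pipeline ids are just placeholders):

```
# pipelines.yml
- pipeline.id: syslog-in
  path.config: "/etc/logstash/conf.d/syslog-in.conf"
- pipeline.id: beats-in
  path.config: "/etc/logstash/conf.d/beats-in.conf"
- pipeline.id: downstream-out
  path.config: "/etc/logstash/conf.d/downstream-out.conf"
```

```
# syslog-in.conf -- local sources
input { syslog { port => 514 } }
output { pipeline { send_to => ["downstream"] } }

# beats-in.conf -- Beats ingestion
input { beats { port => 5044 } }
output { pipeline { send_to => ["downstream"] } }

# downstream-out.conf -- single exit point toward the next zone
input { pipeline { address => "downstream" } }
output {
  tcp {
    host => "inside-logstash.example.com"  # placeholder for the next hop
    port => 23456
    codec => json
  }
}
```

The pipeline-to-pipeline input/output keeps each source isolated while funnelling everything through one outbound pipeline.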

I know it sounds complex, but I have to deal with sending data from separate network zones.

What's the purpose of sending it to multiple places? Is this for load balancing?

At one point, I had a Logstash collector node that forwarded to a Kafka queue, which was then consumed by two other Logstash nodes. The processing was split between those two nodes. Is this what you are trying to do?
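For context, that Kafka hop looked roughly like this (the broker addresses, topic name, and group id here are placeholders, not my actual values):

```
# On the collector node
output {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topic_id => "logs"
    codec => json
  }
}

# On each of the two processing nodes
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics => ["logs"]
    group_id => "logstash-processors"  # same group id, so the two nodes share the partitions
    codec => json
  }
}
```

Because both consumers join the same consumer group, Kafka splits the topic's partitions between them, which is what divided the processing load.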

We currently have a global logging framework using NiFi (though not all logs follow the same path). We are moving away from NiFi, its clusters in particular, but plan to keep the same security principles, which means we need to daisy-chain some components together in order to get data from one secure zone into another. We will be using Kafka as well, but mainly to assist with integrating with other 3rd-party systems.

I think I understand. Could you set up a TCP (or UDP) input, then, without processing, send using the TCP output to the Logstash instance in the inside zone? If possible, it might be good to validate the data at this stage, in case your outer zone becomes compromised.

Data source --TCP/UDP--> Logstash DMZ --TCP--> Logstash Inside

Rough (untested) config:

input {
  tcp {
    port => 12345
  }
}
output {
  tcp {
    host => "logstash-inside.example.com"  # placeholder for the inside-zone host
    port => 23456
    codec => json
  }
}
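For the validation step mentioned above, a grok-based filter between the input and output is one way to do it (a sketch; the pattern and tag name are just one choice, untested):

```
filter {
  # Reject anything that doesn't look like a syslog line
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
    tag_on_failure => ["_not_syslog"]
  }
  if "_not_syslog" in [tags] {
    drop { }
  }
}
```

That way, arbitrary data pushed at the DMZ node from a compromised outer zone never makes it into the inside zone.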
Thanks Ian.
While this offers a break in protocol, it also means (potentially) a lot of ports open in the firewall.

I have got this working using pipelines, although I am seeing an error when trying to send to 2 lumberjack outputs (for load balancing).

[ERROR][logstash.outputs.lumberjack][core_ls] Client write error, trying connect {:e=>#<IOError: Connection reset by peer>, :backtrace=>["org/jruby/ext/openssl/SSLSocket.java:950:in `syswrite'"

This error only occurs when I have it configured to send to 2 hosts. When configured with one host (either of the pair), things work as expected with no errors.

As yet, I haven't seen a satisfactory answer to this issue online.
