Logstash mirror output

Hi,

Is there a way to mirror a Logstash input to 2 or more outputs?
I added 2 outputs to my winlogbeat.yml but it only seems to take the first.

I'm trying to send one pipeline to ES and a copy to another output.

Thanks.

Looks like the answer may be that this is unsupported by Logstash but can be solved by adding "redis"?

Is there a way to mirror a Logstash input to 2 or more outputs?

Logstash sends all events to all outputs listed in its configuration, i.e. "mirroring" is the default behavior.
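
For example, with something like this (the file path is just a placeholder), every event that passes through the pipeline is written to both outputs:

output {
  elasticsearch {
    hosts => "192.168.1.10:9200"
  }
  file {
    path => "/tmp/archive-%{+YYYY-MM-dd}.json"
  }
}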

I added 2 outputs to my winlogbeat.yml but it only seems to take the first.

Well, Winlogbeat is a whole different story than Logstash. But yes, I think the Beats programs only send their data to one output.
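
If I remember correctly you pick exactly one output section in winlogbeat.yml, something like this (host and port are placeholders):

output.logstash:
  hosts: ["192.168.1.10:5044"]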

This seems to be an option but I don't see any docs about it.
It seems you use one output block and then the plugins needed for that output.

output {
  elasticsearch {
    hosts => "192.168.1.10:9200"
    sniffing => false
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  file {
    path => "/opt/syslog-ng/logs/logstash/%{host}-%{+YYYY-MM-dd}.json"
    codec => json { format => "custom format: %{message}"}
    codec => json
  }
}

This seems to be an option but I don't see any docs about it.

I don't know where or if it's documented.

It seems you use one output block and then the plugins needed for that output.

No, that's not how it works. Unless you use conditionals, events from all inputs are routed to all outputs via all filters.
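
For completeness, if you ever do want to split rather than mirror, you wrap the outputs in conditionals, e.g. (the [type] value here is just an example):

output {
  if [type] == "wineventlog" {
    elasticsearch {
      hosts => "192.168.1.10:9200"
    }
  } else {
    file {
      path => "/tmp/other-events.json"
    }
  }
}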

That's what I think I want: all Beats input written to a JSON file output for archiving and replaying, and a copy sent to ES.
What's strange is the docs say "By default, this output writes one event per line in json format. You can customize the line format using the line codec like", but whatever codec I use, it drops the JSON and starts printing plain text.

output {
  file {
    path => "/opt/syslog-ng/logs/logstash/%{host}-%{+YYYY-MM-dd}.json"
    codec => json
    codec => line { format => "custom format: %{message}"}
  }

  elasticsearch {
    hosts => "192.168.1.10:9200"
    sniffing => false
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

The default codec works for me:

$ cat test.config 
input { stdin { } }
output {
  file {
    path => "/tmp/file-output"
  }
}
$ echo hello | /opt/logstash/bin/logstash -f test.config
Settings: Default pipeline workers: 8
Pipeline main started
Pipeline main has been shutdown
stopping pipeline {:id=>"main"}
$ cat /tmp/file-output 
{"message":"hello","@version":"1","@timestamp":"2017-05-11T05:37:27.210Z","host":"lnxolofon"}

Yeah, but if the default is JSON then why does codec => json remove the JSON in the output?

Ideally I wanted to send my output to ES and TCP, but I can't seem to get the json codec to work in a TCP output, so I started using the file output to test.

Yeah, but if the default is JSON then why does codec => json remove the JSON in the output?

I don't see that it does.

$ cat test.config 
input { stdin { } }
output {
  file {
    path => "/tmp/file-output"
    codec => json
  }
}
$ echo hello | /opt/logstash/bin/logstash -f test.config
Settings: Default pipeline workers: 8
Pipeline main started
Pipeline main has been shutdown
stopping pipeline {:id=>"main"}
$ cat /tmp/file-output
{"message":"hello","@version":"1","@timestamp":"2017-05-11T07:30:38.577Z","host":"lnxolofon"}

(Note that you probably want to use the json_lines codec with the file output to get each line terminated by a newline character.)
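
In other words, something along these lines (same test setup as above):

output {
  file {
    path => "/tmp/file-output"
    codec => json_lines
  }
}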

This thread might get more interesting if you show what you get instead of describing the results. A recipe for reproducing the unwanted behavior (like above) is often useful.

Would that work the same way in a TCP output?

Yes. But why ask hypothetical questions when you can try it out?

It's not hypothetical; I have been trying to use TCP forwards with no luck. I wanted to see if I could get any readable output from Logstash in a format that could be imported. (Last-ditch effort after days of failing TCP output.)

For syslog I have syslog-ng sending a copy of the data to ES, Splunk, and a file archive:
host --TCP/UDP:514--> rsyslog relay --TCP:514--> syslog-ng --> ES, SplunkForwarderInput, FileArchive

I'm trying to split Beats data to the same destinations. I have Logstash running on the remote rsyslog relays and on the syslog-ng server.

I can forward TCP data to either of them but the data does not seem to be getting sent or ingested correctly.
It looks like a huge JSON blob with no separation between events.

Did you try using the json_lines codec?
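
For reference: over TCP the plain json codec just serializes each event with nothing in between, so the receiver sees one long run of documents, while json_lines terminates each event with a newline. Roughly (sketch, fields trimmed):

# codec => json – events run together with no delimiter:
{"message":"a","@version":"1"}{"message":"b","@version":"1"}

# codec => json_lines – one event per line:
{"message":"a","@version":"1"}
{"message":"b","@version":"1"}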

Yep, that seems to help. There seemed to be a cache buildup in LS, and when it flushed, the receiving parser could not pick up the stream because it was broken up between packets. I added json_lines to the TCP output.
tcp {
  host => "192.168.1.16"
  port => "5140"
  mode => "client"
  codec => "json_lines"
}
# SYSLOG-NG is listening on PORT 5140.
source s_BEATS {network(port(5140) log-msg-size(65536) flags(no-parse));};
destination d_jfile { file("/opt/syslog-ng/logs/$HOST_FROM-$R_HOUR.json" template("$(format-json --scope dot-nv-pairs)\n"));};
log { source(s_BEATS); parser(p_jsoneventv0); destination (d_jfile); };

This is what I am getting from syslog-ng when it tries to read the input.
I'm using this parser for the JSON:
https://www.balabit.com/documents/syslog-ng-ose-3.9-guides/en/syslog-ng-ose-guide-admin/html-single/index.html#json-parser-options

[2017-05-11T21:22:17.436365] Incoming log entry; line='{"scheme":"http","ip":"192.168.1.16","tcp_connect_rtt":{"us":9000},"monitor":"http@http://192.168.1.16:9200","type":"http","http_rtt":{"us":13000},"url":"http://192.168.1.16:9200","tags":["beats_input_raw_event"],"duration":{"us":22001},"@timestamp":"2017-05-12T01:22:21.258Z","rtt":{"us":22001},"port":9200,"response":{"status":200},"beat":{"hostname":"TYLER-LAPTOP","name":"TYLER-LAPTOP","version":"5.4.0"},"@version":"1","host":"TYLER-LAPTOP","up":true}'
[2017-05-11T21:22:17.436497] Error extracting JSON members into LogMessage as the top-level JSON object is not an object; input='{"scheme":"http","ip":"192.168.1.16","tcp_connect_rtt":{"us":9000},"monitor":"http@http://192.168.1.16:9200","type":"http","http_rtt":{"us":13000},"url":"http://192.168.1.16:9200","tags":["beats_input_raw_event"],"duration":{"us":22001},"@timestamp":"2017-05-12T01:22:21.258Z","rtt":{"us":22001},"port":9200,"response":{"status":200},"beat":{"hostname":"TYLER-LAPTOP","name":"TYLER-LAPTOP","version":"5.4.0"},"@version":"1","host":"TYLER-LAPTOP","up":true}'
[2017-05-11T21:22:17.436523] Message parsing complete; result='0', rule='p_jsoneventv0', location='/etc/syslog-ng/syslog-ng.conf:18:14'

SUCCESS! Working in the lab.
Update: I switched the syslog-ng parser from json to kv-pairs. If I knew the JSON RFC better I'd probably know why that made the difference.

/etc/logstash/conf.d/beats-logstash.conf
input {
  beats {
    port => 5044
  }
}

output {
  file {
    path => "/opt/syslog-ng/logstash/%{host}-%{+YYYY-MM-dd}.json"
  }

  tcp {
    host => "192.168.1.16"
    port => "5140"
    mode => "client"
    codec => "json_lines"
  }
}

/etc/syslog-ng/syslog-ng.conf

syslog-ng listens on port 5140 for beats output.

source s_BEATS {network(port(5140) log-msg-size(65536) flags(no-parse));};
destination d_jfile { file("/opt/syslog-ng/logs/$HOST_FROM-$R_HOUR.json"); };
log { source(s_BEATS); parser {kv-parser();}; destination (d_jfile); };

Debug output from syslog-ng:
/usr/sbin/syslog-ng --debug -F -f /etc/syslog-ng/syslog-ng.conf
[2017-05-11T22:03:21.454954] Incoming log entry; line='{"scheme":"http","ip":"192.168.1.16","tcp_connect_rtt":{"us":2000},"monitor":"http@http://192.168.1.16:9200","type":"http","http_rtt":{"us":2000},"url":"http://192.168.1.16:9200","tags":["beats_input_raw_event"],"duration":{"us":5000},"@timestamp":"2017-05-12T02:03:25.258Z","rtt":{"us":5000},"port":9200,"response":{"status":200},"beat":{"hostname":"TYLER-LAPTOP","name":"TYLER-LAPTOP","version":"5.4.0"},"@version":"1","host":"TYLER-LAPTOP","up":true}'
[2017-05-11T22:03:21.455021] Message parsing complete; result='1'
[2017-05-11T22:03:21.455077] Outgoing message; message='May 11 22:03:21 hal {"scheme":"http","ip":"192.168.1.16","tcp_connect_rtt":{"us":2000},"monitor":"http@http://192.168.1.16:9200","type":"http","http_rtt":{"us":2000},"url":"http://192.168.1.16:9200","tags":["beats_input_raw_event"],"duration":{"us":5000},"@timestamp":"2017-05-12T02:03:25.258Z","rtt":{"us":5000},"port":9200,"response":{"status":200},"beat":{"hostname":"TYLER-LAPTOP","name":"TYLER-LAPTOP","version":"5.4.0"},"@version":"1","host":"TYLER-LAPTOP","up":true}'
