Gather Syslogs from a router

Hi,

How could I set up my router to send its syslogs to Logstash?

I already have my ELK server fully set up, and I have a client server that is successfully sending its syslogs to Logstash, so I can visualize them in Kibana.

How would this work with a router? I've got a Juniper SRX210 at the moment. I assume I need to write another config file. Could anyone help me out with this one?

http://kb.juniper.net/InfoCenter/index?page=content&id=KB16502&actp=search

Update,

I have told my router to send syslogs to my server by entering the following line on my Juniper router, as the Knowledge Base suggested:

set system syslog host 10.0.0.2 any any
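
For reference, I believe that single set command ends up as the following stanza in the hierarchical configuration (followed by a commit):

system {
    syslog {
        host 10.0.0.2 {
            any any;
        }
    }
}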

Then I tested whether anything was coming in. eth1 is the interface my router is connected to, and eth0 is the server's default interface.

Something is definitely coming in, and I'm pretty sure it's the logs. How do I get Logstash to pick them up?

I'm working on Ubuntu 14.04.

See https://www.elastic.co/guide/en/logstash/current/config-examples.html#_processing_syslog_messages for an example.
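
A minimal sketch along the lines of that example could look like this (untested; the port and the elasticsearch settings are placeholders to adapt, and older Logstash versions use host instead of hosts in the elasticsearch output):

input {
  syslog {
    port => 514          # standard syslog port; binding below 1024 normally requires root
    type => "syslog"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }   # also print events to the console while testing
}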

How do I determine which port the router is sending the syslogs to? I listened on ports 5000 and 514 with tcpdump, but nothing came in.

Edit: I used

tcpdump -i eth1 udp port 514
and noticed that the syslogs are indeed arriving there. I need to configure Logstash to listen on port 514 on eth1, correct?

I was listening on my default interface, eth0, on the right port, but it had to be eth1. How do I tell Logstash to listen on that interface? I think I'm going to have to set eth1 as the default.

By default Logstash should listen on all interfaces. You can verify what's being listened on with netstat -an.
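
If you ever do want to bind to one specific address, I believe the syslog input takes a host option, something like this (10.0.0.2 being the eth1 address from your router config):

input {
  syslog {
    host => "10.0.0.2"   # bind only to this address; defaults to 0.0.0.0 (all interfaces)
    port => 514
  }
}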

Hi and thank you for your support,

I'm having an issue with Logstash and the new config file; a configtest returns the following:
http://pastebin.com/0JjxnCjC

I assume this has something to do with the rubydebug codec that I specified in my config file? I tried to install some Ruby packages following https://www.brightbox.com/docs/ruby/ubuntu/, but that didn't work out.

Update:
I removed sniffing => true from a separate output config that I use with Filebeat, and now it's narrowed down to a single error instead of a bunch:

Connection refused {:class=>"Manticore::SocketException", :level=>:error}
Configuration OK
user01@ubuntuS1:~$

Another update,
Elasticsearch was not running... Oops. Maybe I should do my own troubleshooting before I post here straight away. I'll keep you posted on further progress with capturing syslogs from my router.

Maybe I should do my own troubleshooting before I post here straight away.

Yes, please.

Logs are coming in on eth1, port 514, but from what I understand Logstash is not able to bind to ports below 1024 without root privileges.

I either have to redirect traffic from that port or give Logstash root access. The latter is not the safest option, but just to test whether Logstash and Kibana can process these logs I want to give Logstash root access so it can listen on port 514.
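
For the redirect option, I'm guessing an iptables rule along these lines would do it (untested, and the target port is just an example):

sudo iptables -t nat -A PREROUTING -i eth1 -p udp --dport 514 -j REDIRECT --to-ports 5514

with Logstash then listening on 5514. But for testing, root access seems simpler.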

How can I do this?

If you're starting Logstash from your shell just start it as root. If you're starting it as a service, adjust the LS_USER option (or whatever it's called) in /etc/default/logstash (Debian-based) or /etc/sysconfig/logstash (RPM-based).
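
For example, a sketch for the Debian-style defaults file (the exact variable names can differ between Logstash versions):

# /etc/default/logstash
LS_USER=root
LS_GROUP=root

followed by a service logstash restart.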

That worked, thanks.

Logstash is currently running and listening on port 514, confirmed with netstat -nlup:

user01@ubuntuS1:~$ netstat -nlup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
udp        0      0 0.0.0.0:58326           0.0.0.0:*                           -
udp        0      0 0.0.0.0:31732           0.0.0.0:*                           -
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -
udp6       0      0 :::21300                :::*                                -
udp6       0      0 :::31732                :::*                                -
udp6       0      0 :::514                  :::*                                -

I checked the Logstash log and noticed the following error:
http://pastebin.com/EQp8Evi1

By the way, would I need to make a new index in the config file for the syslogs? I have a Filebeat index now that goes with the client server, and in Kibana it's set as the default, but I assume the router's syslogs won't go to the same index.

IOError: closed stream

Not sure why this happens.

By the way, would I need to make a new index in the config file for the syslogs?

Not necessarily, but it might be a good idea. The current recommendation from the Elastic folks seems to be to segregate different kinds of logs into different index series, but at least in my use cases that would result in too many index series.
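
If you do want a separate index series for the router, here's a sketch of how the elasticsearch output could look (the index name is just an example):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"   # one index per day for the router's syslogs
  }
}

You would then add syslog-* as a new index pattern in Kibana.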

After two hours of trying all kinds of different things I decided to just reboot the server, and the error is gone. Alright, I guess.

I can visualize them in Kibana now as well. Thanks for your help.