How to define what data is sent to Elastic from Logstash?

Hello,

I have ELK running on Ubuntu Server with the intention of it receiving Syslog from firewalls and Filebeat from DNS/DHCP/AD services.

The primary issue is that the firewalls generate a lot of Syslog noise, with no way to define which events or facilities get sent on the firewall's side. So I was hoping there was a way to parse the Syslog data in a pipeline .conf file, but I'm not sure what syntax I should be using or how to define what it is I would like to ingest. Everything I've tried on my own or copy/pasted has broken ingestion, and nothing shows up in Kibana.

Ideally, I just want to drop all of the local performance monitoring. That leaves traffic and VPN information and critical alerts for intrusions.

Hi, you can drop events in a few different ways.

May I suggest forwarding your event data using Filebeat and dropping events based on a regex of the message? That way you avoid sending too much useless data to Logstash.

Then, based on conditional field values, you can also drop events in Logstash:
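For example, a minimal drop_event processor in filebeat.yml might look like this (the regex below is only a placeholder; replace it with whatever pattern actually matches your noisy firewall messages):

    processors:
      - drop_event:
          when:
            regexp:
              message: "performance|hardware"

Any event whose message field matches the regex is discarded before Filebeat ships it, so it never reaches Logstash at all.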

    filter {
      if [loglevel] == "debug" {
        drop { }
      }
    }

Where would Filebeat reside in the case of Syslog coming from the Firewalls? I can't install Filebeat on the firewall the same way I have it installed on my domain controller.

I will research the drop filter to see if I can apply it to the FW's syslog messages coming into Logstash.

I use a syslog server to centralize logs coming from all kinds of sources for archive and security purposes. You can simply forward the logs to a Filebeat server listening for syslog, which then forwards them on to Logstash. Otherwise, you could just drop them in Logstash using the drop filter.
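As a sketch, assuming Filebeat's syslog input and a Logstash host named logstash.local (both names here are placeholders), the relevant parts of filebeat.yml could look like:

    filebeat.inputs:
      - type: syslog
        protocol.udp:
          host: "0.0.0.0:514"

    output.logstash:
      hosts: ["logstash.local:5044"]

With this, the firewalls (or the rsyslog server) point their syslog output at the Filebeat host on UDP 514, and Filebeat relays the events to Logstash.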


I didn't even consider a Syslog server! Good suggestion. I will try that direction next week. Thanks!

So I have an rsyslog server configured, and it is receiving all of my firewall syslog information. The issue at this point is all the "junk" data. It produces a bunch of hardware events that I really don't care about, for instance. Where do I begin to parse out those logs before they're forwarded to Logstash and ultimately viewed in Kibana?

Should I configure Filebeat on my rsyslog server, or set up a separate device dedicated to Filebeat? If I did the latter, is there a way I could leverage the Filebeat server to centralize other Filebeat-equipped devices?

edit: "parse", I realized, isn't the word I'd like to use. Filtering out the information I don't want is a better way to put it.

Hi,

You could use the drop_event processor in Filebeat in order to avoid sending junk logs into your stack.

The best option is always to get rid of useless logs as early as possible. Try configuring your firewall before it sends to syslog; failing that, you could also redirect them from syslog to /dev/null or drop them through Filebeat.
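For the rsyslog route, a minimal RainerScript rule like the following would discard matching messages before they ever reach Filebeat (the "hardware" match string is just an example; substitute whatever identifies your junk events):

    # /etc/rsyslog.d/10-drop-noise.conf
    # Discard hardware-noise messages instead of writing or forwarding them
    if $msg contains "hardware" then stop

Dropping at the rsyslog layer follows the "discard as early as possible" advice above: the events never hit disk or the Filebeat pipeline.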

Gotcha. I will look at these options.

Unfortunately, the firewall doesn't give any control over what's forwarded via syslog. It's either all or nothing, at least as far as I'm aware. There's no documentation on configuring which programs or facilities get sent.

Thanks for the tips.

You'll have to do this by regex then, using a Logstash filter or Filebeat processors :wink:
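On the Logstash side, a regex-based drop is just a conditional around the drop filter. A minimal sketch (the pattern here is a placeholder for whatever marks the unwanted events):

    filter {
      if [message] =~ /hardware|performance/ {
        drop { }
      }
    }

Everything that doesn't match the pattern passes through untouched, so traffic, VPN, and intrusion events would still reach Kibana.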

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.