Need help updating syslog conf and grok filter


I am setting up my first ELK server. My first goal is to ingest firewall syslogs from a MikroTik router; the second is to add other syslog sources. I have gotten to a point where I am stuck and going in circles.

I have a syslog message that is partially parsed, but I need the rest of the "message" field parsed. When I make changes to 03syslog.conf in the conf.d directory, I see no changes in the output. I cannot tell whether Logstash is not applying my changes, or is applying them but my filter pattern is incorrect.

I am able to run the following command to get stdout from the command line.

 usr/share# bin/logstash -f /etc/logstash/conf.d/03syslog.conf 

I get the following output -

           "message" => "input: in:ether2WAN out:(unknown 0), src-mac 00:26:cb:d5:d6:d9, proto TCP (RST),>, len 40",
    "severity_label" => "Informational",
    "facility_label" => "system",
           "program" => "firewall",
          "severity" => 6,
              "host" => "",
         "timestamp" => "Feb 16 09:30:58",
         "logsource" => "MikroTik",
        "@timestamp" => 2019-02-16T09:30:58.000Z,
              "type" => "syslog",
          "facility" => 3,
          "@version" => "1",
          "priority" => 30,
               "tags" => [
        [0] "_grokparsefailure"
    ]
}

I am trying to further parse the message field. I have been able to parse it successfully using the grok tool.

When I update my config file 03syslog.conf to further parse the message, I do not see a change in the output. Below is the file:

input {
  syslog {
    port => 5000
    id => "syslog server"
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program},%{NOTSPACE:direction}:%{DATA:src_zone} out:%{DATA:dst_zone}, src-mac %{MAC:src_mac}, proto %{DATA:proto}%{SPACE}$" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  stdout { codec => rubydebug }
}

I have config.reload.automatic: true set in logstash.yml.
Is this the correct approach for getting the filtering correct? Is there a better way to go about getting a configuration working?
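One quick sanity check when you can't tell whether Logstash is even accepting an edited file is the config-test flag; this is a sketch assuming you run it from the Logstash install directory:

```shell
# Validate the pipeline config and exit without starting the pipeline;
# prints a parse error and a non-zero exit status if the file is malformed.
bin/logstash -f /etc/logstash/conf.d/03syslog.conf --config.test_and_exit
```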
Thank you.


In the rubydebug output you can see that

"message" => "input: in:ether2WAN out:(unknown 0), src-mac 00:26:cb:d5:d6:d9, proto TCP (RST),>, len 40"

The syslog input has received an RFC3164 encoded message and parsed off the timestamp, hostname, priority, etc. itself. You are just left with the body of the message. So you can change your grok to

match => { "message" => "%{NOTSPACE:direction}:%{DATA:src_zone} out:%{DATA:dst_zone}, src-mac %{MAC:src_mac}, proto %{DATA:proto}%{SPACE}$" }

Running with automatic reload enabled as you tune your filters is a good approach.
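If you want to iterate on the pattern offline before touching the pipeline, the suggested grok can be approximated with a plain Python regex (NOTSPACE roughly maps to `\S+`, DATA to `.*?`, and MAC is spelled out as hex pairs). The group names mirror the grok captures, but this is a sketch, not grok itself:

```python
import re

# Rough Python-regex equivalent of the suggested grok pattern, for quick
# offline testing. Group names mirror the grok field names.
pattern = re.compile(
    r"(?P<direction>\S+):(?P<src_zone>.*?) out:(?P<dst_zone>.*?),"
    r" src-mac (?P<src_mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2}),"
    r" proto (?P<proto>.*?)\s*$"
)

# The message body from the rubydebug output above.
msg = ("input: in:ether2WAN out:(unknown 0), "
       "src-mac 00:26:cb:d5:d6:d9, proto TCP (RST),>, len 40")

m = pattern.search(msg)
print(m.group("direction"))  # input
print(m.group("dst_zone"))   # (unknown 0)
print(m.group("proto"))      # TCP (RST),>, len 40
```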


Thanks, that is very helpful. Let me update.


Thanks, I am now parsing...yay!

Question: when I run Logstash with the config and then stop the stdout process, is there any issue with running the command below again?
Is there a command to just view stdout? Thanks.

./logstash -f /etc/logstash/conf.d/03syslog.conf 


OK, I noticed that my UDP firewall logs fail parsing.

It looks like TCP traffic always has flags (ACK, FIN, etc.) following the proto, while UDP does not. Examples below.
Does anyone have a recommendation on how to handle a syslog message that occasionally contains an additional field?


UDP (no flags)

forward: in:vlan30Razzor out:ether2WAN, src-mac 88:de:a9:98:63:7e, proto UDP,>, len 40

TCP (contains flags)

forward: in:vlan10Raven out:ether2WAN, src-mac 00:18:61:30:af:c9, proto TCP (ACK,FIN),>, len 52

match => { "message" => "proto %{WORD:protocol}( \((?<tcpopts>[A-Z,]+)\))?, %{IPV4:ip}" }

A group surrounded by ( and )? is optional.
