Logstash grok filter for firewall traffic logs

First, I have to mention that this is my first ELK experience.
I set up ELK 7.04 on Ubuntu 18.04.3 LTS and am trying to ingest my gateway/firewall logs into Logstash. Filebeat is also set up on the box. I can see the traffic logs coming in on UDP 514, and I also see them when looking at my index in Kibana.

I have a couple of issues:

1- Even though I have my config file under conf.d ("/etc/logstash/conf.d/fw.conf"), Logstash does not load it when the service starts. I have to run it manually with "sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/fw.conf".

2- I cannot find the correct grok filter to break the individual fields out of the "message" field. I had done this in Graylog with no issue, but I cannot find the right format for Logstash.

The following are a couple of events from the actual logs:

<173>ulogd[902]: Accepted IN=br-lan OUT=eth1 MAC=48:5d:36:22:58:bc:f4:f5:d8:bf:9a:a4:08:00 SRC=192.168.1.20 DST=8.8.8.8 LEN=84 TOS=00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=3487 SEQ=1 MARK=8000

<173>ulogd[902]: Blocked IN=eth1 OUT= MAC=48:5d:36:22:58:bd:84:b5:9c:a1:4b:c1:08:00 SRC=124.65.12.34 DST=x.x.x.x LEN=44 TOS=00 PREC=0x00 TTL=243 ID=19973 PROTO=TCP SPT=47004 DPT=1433 SEQ=315377175 ACK=0 WINDOW=1024 SYN URGP=0 MARK=0

================================

  • my fw.conf file:

input {
  udp {
    port => 514
  }
  stdin {}
  syslog {
    port => 1514
  }
}

filter {
  grok {
    match => ["message", "%{WORD} %{WORD:action} (IN=)%{WORD:in_interface} (OUT=)%{WORD:out_interface} (mac=%{MAC:src_mac})?()?(SRC=)%{IP:src_ip} (DST=)%{IP:dst_ip} (LEN=)%{WORD:len} (TOS=)%{WORD:tos} (PREC=)%{WORD:prec} (TTL=)%{INT:ttl} (ID=)%{INT:id} (PROTO=)%{WORD:protocol} (SPT=)%{INT:src_port} (DPT=)%{INT:dst_port} (SEQ=)%{INT:seq} (ACK=)%{INT:ack} (WINDOW=)%{INT:window}?()?"]
  }
}

output {
  stdout {}
  elasticsearch {
    hosts => ["localhost"]
    index => "my_fw_index"
  }
}

=============================

  • Kibana view


@timestamp Oct 20, 2019 @ 13:27:29.354
t @version 1
t _id kTU16m0B53Lvd6e-rYa7
t _index my_fw_index
# _score -
t _type _doc
t host 192.168.1.1
t message <173>ulogd[902]: Accepted IN=br-lan OUT=eth1 MAC=48:5d:36:22:58:bc:92:3b:ad:28:5b:6d:08:00 SRC=192.168.1.25 DST=x.x.x.x LEN=52 TOS=00 PREC=0x00 TTL=63 ID=21747 DF PROTO=TCP SPT=38682 DPT=443 SEQ=3425570077 ACK=2831798699 WINDOW=501 ACK URGP=0 MARK=8000
t tags _grokparsefailure

Can someone help me with these issues?

Thanks,

I would not use grok

    dissect { mapping => { "message" => "<%{}>%{program}[%{pid}]: %{action} %{[@metadata][restOfLine]}" } }
    kv { source => "[@metadata][restOfLine]" whitespace => strict }
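
If you want to keep the field names from your grok pattern, you can follow the kv with a mutate. An untested sketch of the whole filter section (the renames are just the ones you had in grok):

    filter {
        dissect { mapping => { "message" => "<%{}>%{program}[%{pid}]: %{action} %{[@metadata][restOfLine]}" } }
        kv { source => "[@metadata][restOfLine]" whitespace => strict }
        # kv should give you one field per KEY=value pair (IN, OUT, MAC, SRC, DST, PROTO, SPT, DPT, ...);
        # bare flags such as DF or SYN contain no '=' and are skipped
        mutate { rename => { "SRC" => "src_ip" "DST" => "dst_ip" "SPT" => "src_port" "DPT" => "dst_port" } }
    }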

For the other question, what is path.config set to? What does the logfile look like during startup? You need to supply more information.

The filter works great...that was awesome.

In the logstash.yml file, path.config does not have any entry. I thought the default path would be "/etc/logstash/conf.d/". Should I add that path to path.config? Is any other modification required?

Regarding the other issue, I discovered that the logstash service is not showing up under services, even though I ran

$systemctl start logstash.service

after installation. The only service that starts automatically is Elasticsearch; Filebeat is also not running on startup.

Regarding the logs, if you are referring to logstash-plain.log, this is what it looks like after restarting the service:

[2019-10-20T22:32:58,642][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2019-10-20T22:32:58,674][INFO ][logstash.inputs.syslog ][main] Starting syslog tcp listener {:address=>"0.0.0.0:1514"}
[2019-10-20T22:32:58,688][INFO ][logstash.inputs.syslog ][main] Starting syslog udp listener {:address=>"0.0.0.0:1514"}
[2019-10-20T22:32:58,706][INFO ][logstash.inputs.udp ][main] Starting UDP listener {:address=>"0.0.0.0:514"}
[2019-10-20T22:32:58,716][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-10-20T22:32:58,747][ERROR][logstash.inputs.udp ][main] UDP listener died {:exception=>#<Errno::EACCES: Permission denied - bind(2) for "0.0.0.0" port 514>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:213:in `bind'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:116:in `udp_listener'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:68:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:314:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:306:in `block in start_input'"]}
[2019-10-20T22:32:58,873][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-10-20T22:33:03,755][INFO ][logstash.inputs.udp ][main] Starting UDP listener {:address=>"0.0.0.0:514"}
[2019-10-20T22:33:03,757][ERROR][logstash.inputs.udp ][main] UDP listener died {:exception=>#<Errno::EACCES: Permission denied - bind(2) for "0.0.0.0" port 514>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:213:in `bind'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:116:in `udp_listener'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:68:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:314:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:306:in `block in start_input'"]}

which makes me believe this might be related to a permission issue with the listener on standard (privileged) ports.

  • Does the permission issue on the standard port cause the service not to start, or is it unrelated?
    And what is the solution to that?

  • Also, unrelated to the service, what is the best solution to gather IP geo location for destination addresses?

Thanks again,

Additional logs from today after restarting the system:

[2019-10-21T00:00:13,991][WARN ][org.logstash.dissect.Dissector][main] Dissector mapping, pattern not found {"field"=>"message", "pattern"=>"<%{}>%{program}[%{pid}]: %{action} %{[@metadata][restOfLine]}", "event"=>{"host"=>"192.168.1.1", "tags"=>["_dissectfailure"], "@version"=>"1", "@timestamp"=>2019-10-21T04:00:13.891Z, "message"=>"<158>dhcpd: DHCPACK on 192.168.1.207 to 80:a5:89:45:18:32 (###################) via br-lan"}}
[2019-10-21T00:00:13,993][WARN ][org.logstash.dissect.Dissector][main] Dissector mapping, pattern not found {"field"=>"message", "pattern"=>"<%{}>%{program}[%{pid}]: %{action} %{[@metadata][restOfLine]}", "event"=>{"host"=>"192.168.1.1", "tags"=>["_dissectfailure"], "@version"=>"1", "@timestamp"=>2019-10-21T04:00:13.892Z, "message"=>"<158>dhcpd: 192.168.1.207 is leased for 86400 seconds, renew in 0 seconds, rebind in 0 seconds"}}
[2019-10-21T00:00:14,090][WARN ][org.logstash.dissect.Dissector][main] Dissector mapping, pattern not found {"field"=>"message", "pattern"=>"<%{}>%{program}[%{pid}]: %{action} %{[@metadata][restOfLine]}", "event"=>{"host"=>"192.168.1.1", "tags"=>["_dissectfailure"], "@version"=>"1", "@timestamp"=>2019-10-21T04:00:13.889Z, "message"=>"<158>dhcpd: DHCPREQUEST for 192.168.1.207 from 80:a5:89:45:18:32 (###################) via br-lan"}}
[2019-10-21T00:05:14,290][WARN ][org.logstash.dissect.Dissector][main] Dissector mapping, pattern not found {"field"=>"message", "pattern"=>"<%{}>%{program}[%{pid}]: %{action} %{[@metadata][restOfLine]}", "event"=>{"host"=>"192.168.1.1", "tags"=>["_dissectfailure"], "@version"=>"1", "@timestamp"=>2019-10-21T04:05:14.141Z, "message"=>"<158>dhcpd: DHCPREQUEST for 192.168.1.207 from 80:a5:89:45:18:32 (###################) via br-lan"}}
[2019-10-21T00:05:14,291][WARN ][org.logstash.dissect.Dissector][main] Dissector mapping, pattern not found {"field"=>"message", "pattern"=>"<%{}>%{program}[%{pid}]: %{action} %{[@metadata][restOfLine]}", "event"=>{"host"=>"192.168.1.1", "tags"=>["_dissectfailure"], "@version"=>"1", "@timestamp"=>2019-10-21T04:05:14.189Z, "message"=>"<158>dhcpd: DHCPACK on 192.168.1.207 to 80:a5:89:45:18:32 (###################) via br-lan"}}
[2019-10-21T00:05:14,292][WARN ][org.logstash.dissect.Dissector][main] Dissector mapping, pattern not found {"field"=>"message", "pattern"=>"<%{}>%{program}[%{pid}]: %{action} %{[@metadata][restOfLine]}", "event"=>{"host"=>"192.168.1.1", "tags"=>["_dissectfailure"], "@version"=>"1", "@timestamp"=>2019-10-21T04:05:14.191Z, "message"=>"<158>dhcpd: 192.168.1.207 is leased for 86400 seconds, renew in 0 seconds, rebind in 0 seconds"}}
[2019-10-21T00:05:19,888][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2019-10-21T00:05:20,435][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>"main"}
[2019-10-21T00:05:20,701][INFO ][logstash.runner ] Logstash shut down.

With systemctl, starting a service does not imply it is enabled to start automatically at boot; you need to enable it as well. Read the systemctl man page for details.
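
For example, on a systemd-based install from the deb/rpm packages, something along these lines should do it:

    sudo systemctl enable logstash    # register the unit so it starts at boot
    sudo systemctl start logstash     # start it now
    systemctl status logstash         # verify it is running

(or "sudo systemctl enable --now logstash" to do both in one step). The same applies to filebeat.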

As the documentation says, you may need a conditional to check that the message has an appropriate format before trying to dissect with a given pattern.
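
For example, only the ulogd messages have the program[pid]: prefix, so something along these lines (untested, adjust the test to your data) would keep the dhcpd lines away from the dissect:

    filter {
        if [message] =~ /ulogd\[\d+\]:/ {
            dissect { mapping => { "message" => "<%{}>%{program}[%{pid}]: %{action} %{[@metadata][restOfLine]}" } }
            kv { source => "[@metadata][restOfLine]" whitespace => strict }
        }
    }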

The permission error suggests Logstash is not running as root, which it would need to be if 514 is a privileged port on your version of UNIX. I would not expect that to prevent Logstash from starting, though.
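
The usual workarounds are either to have the firewall send to a port above 1024 (and set udp { port => 5514 } or similar in the input), or to keep sending to 514 and redirect it locally on the Logstash box, for example (assuming iptables is available):

    # redirect incoming syslog traffic from the privileged port to an unprivileged one
    sudo iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-port 5514

Another option is to grant the Logstash java process the CAP_NET_BIND_SERVICE capability, but the port redirect is usually simpler.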

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.