Logstash Grok Pattern Watchguard Firewall

Hey Guys,

this is my first topic here in this forum.
I have set up Elasticsearch log management in our company, and I'm having a bit of trouble with grok patterns. Since I can't use the integrations, I need to create my own patterns for indexing and filtering syslog messages. For our WatchGuard firewall I need to create a number of grok filters. The problem is that each firewall policy needs about 5-10 filters, because no two syslog messages look alike. We have about 300 policies, so I would need to create between 1,500 and 3,000 grok filters using the Grok Debugger in Kibana. That is really hard and would waste a lot of time.

The following is an example of a filter I created. How can I do this faster or better?
grok {
  match => { "message" => "<%{INT:syslog_id}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{WORD:firebox_tag} %{WORD:firebox_id} %{DATA:firebox_name} %{DATA:UNWANTED} firewall: msg_id=%{DATA:msg_id} %{WORD:event.allow} %{DATA:event.source_network} %{DATA:event.destination_network} %{DATA:UNWANTED} %{DATA:event.protocol} %{INT:unbekannte_zweivor_src_ip} %{INT:unbekannte_vor_src_ip} %{IPV4:event.src_ip} %{IPV4:event.dst_ip} %{INT:event.src_port} %{INT:event.dst_port} id=%{INT:id} seq=%{INT:seq} src_user=\"%{DATA:event.src_user}\" \(%{DATA:event.policy}\)" }
  add_field => { "received_from" => "%{host}" }
}


Thanks for your help.

Welcome to the community!

Your Firebox sends several types of log messages for events that occur on the device. Each message includes the message type in the text of the message. The log message types are:

  • Traffic
  • Alarm
  • Event
  • Debug
  • Statistic

According to the documentation, there are 5 log types. No matter how many policies you have, the fields should be common or optional. You should figure out patterns for the main log types. For instance, fields can be:

  • optional - wrap the segment in ( )?
  • replaced by another field in the same position - use | (logical OR)
  • in CEF format - use the cef codec on the input
  • key=value lines, or key=value parts of lines - use the kv filter
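A hedged sketch of the first two ideas (the field names here are illustrative, not taken from the Firebox documentation): ( ... )? makes a segment optional, and an alternation like Allow|Deny matches either action word, so one pattern can cover several message variants:

```
grok {
  # (?<action>Allow|Deny)          -> alternation: matches either word
  # ( src_user=\"%{DATA:...}\")?   -> optional: matches with or without src_user
  match => { "message" => "(?<action>Allow|Deny) %{IPV4:src_ip} %{IPV4:dst_ip}( src_user=\"%{DATA:src_user}\")? \(%{DATA:policy}\)" }
}
```

This single pattern would match both `Allow 10.0.0.1 10.0.0.2 (Policy-1)` and `Deny 10.0.0.1 10.0.0.2 src_user="bob" (Policy-2)`, which is how a handful of patterns can replace hundreds of near-duplicates.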

2014-07-02 17:38:43 Member2 Allow webcache/tcp 42973 8080 3-Trusted 1-WCI Allowed 60 63 (Outgoing-proxy-00) proc_id="firewall" rc="100" src_ip_nat="" tcp_info="offset 10 S 2982213793 win 2105" msg_id="3000-0148"

You can use grok, dissect, or csv (if possible) to split on spaces, then parse the rest of the line with kv - pairs like proc_id="firewall" rc="100" ... are exactly what the kv plugin handles, and it does so with very little effort.
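One possible shape for the sample line above (an untested sketch; the field names in the dissect mapping are guesses at the column meanings): dissect splits the fixed positional prefix, then kv picks apart the trailing key=value pairs:

```
filter {
  dissect {
    # split the fixed positional columns; everything after the
    # (policy) token is left in [kvpairs] for the kv filter
    mapping => {
      "message" => "%{ts} %{+ts} %{member} %{action} %{service} %{src_port} %{dst_port} %{src_zone} %{dst_zone} %{disposition} %{f1} %{f2} (%{policy}) %{kvpairs}"
    }
  }
  kv {
    # turns proc_id="firewall" rc="100" msg_id="3000-0148" ... into fields
    source      => "kvpairs"
    field_split => " "
    value_split => "="
  }
}
```

dissect is cheaper than grok because it splits on delimiters instead of running regexes, which matters at firewall log volumes.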

Hello Rios,
Thanks for your help.

I don't know how to build a pattern with common or optional fields. I implement the grok pattern via CLI in /etc/logstash/conf.d/*.conf on a Debian server and start Logstash with the -f argument pointing at the watchguard.conf file. How can I configure a pipeline using a grok pattern in Kibana? Is there any way to do this? My workaround right now is that I have to start Logstash with my .conf file manually via an RDP session after every reboot.
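A side note on the manual start: when Logstash is installed from the Debian package, it ships as a systemd service that reads /etc/logstash/pipelines.yml, so a pipeline can start automatically after reboot. A minimal sketch (the pipeline id is an example name; adjust the path to your install):

```
# /etc/logstash/pipelines.yml
- pipeline.id: watchguard
  path.config: "/etc/logstash/conf.d/watchguard.conf"
```

Enabling the service with `sudo systemctl enable --now logstash` would then remove the need to start it by hand over RDP.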

Could you give me an example grok pattern including your example fields? Or do you have any instructions that explain this in a way that is easy to understand?

best regards

Can you provide 2-3 log type samples?

Hey, sorry for my late reply

I had to black out a few places, but I hope that's enough anyway

That's an unfiltered raw syslog message:

<188>Sep 7 14:50:09 Secondary C03B027D66266 EEW-HQ-M590 (2023-09-07T12:50:09) firewall: msg_id="3000-0148" Allow srcNet dstNet 69 udp 20 126 XX.XX.XX.XX XX.XX.XX.XX 51495 53 (Firewall_Policy_name)

That's a filtered raw syslog message:

<190>Sep 7 14:50:09 Secondary C03B027D66266 EEW-HQ-M590 (2023-09-07T12:50:09) firewall: msg_id="3000-0151" Allow srcNet dstNet tcp XX.XX.XX.XX XX.XX.XX.XX 65212 88 src_user="username@domain" duration="19" sent_bytes="2170" rcvd_bytes="2411" (Firewall_Policy_name)

My working filter for this syslog message:

<%{INT:syslog_id}>%{SYSLOGTIMESTAMP:sylog_timestamp} %{WORD:firebox_tag} %{WORD:firebox_id} %{DATA:firebox_name} %{DATA:UNWANTED} firewall: msg_id="%{DATA:msg_id}" %{WORD:event.approved} %{DATA:event.source_network} %{DATA:event.destination_network} %{DATA:event.protocol} %{IPV4:event.src_ip} %{IPV4:event.dst_ip} %{INT:event.src_port} %{INT:event.dst_port} src_user="%{DATA:event.src_user}" duration="%{INT:duration}" sent_bytes="%{INT:sent_bytes}" rcvd_bytes="%{INT:rcvd_bytes}" (%{DATA:event.policy})

I would use it like this:
<%{INT:syslog_id}>%{SYSLOGTIMESTAMP:sylog_timestamp} %{WORD:firebox_tag} %{WORD:firebox_id} %{DATA:firebox_name} %{DATA} firewall: msg_id="%{DATA:msg_id}" %{WORD:[event][approved]} %{DATA:[event][source_network]} %{DATA:[event][destination_network]} %{DATA:[event][protocol]} %{IPV4:[event][src_ip]} %{IPV4:[event][dst_ip]} %{INT:[event][src_port]} %{INT:[event][dst_port]} src_user="%{DATA:[event][src_user]}" duration="%{INT:duration}" sent_bytes="%{INT:sent_bytes}" rcvd_bytes="%{INT:rcvd_bytes}" \(%{DATA:[event][policy]}\)

  • ( ) must be escaped with a backslash: \( \)
  • if you don't want the value, just don't assign a name: %{DATA:UNWANTED} -> %{DATA}
  • because of ECS, if you want event as a JSON object you should use [event][src_user]; otherwise you might have problems on the LS and Kibana side, because the field will be named literally "event.src_user" instead of a nested JSON object in ES. This is a sample:
   "event": {
      "src_user": "",
      "source_network": "srcNet",
      "dst_ip": ""
   }

If you decide to use the event JSON object, you will have to recreate the data view in Kibana.


Thanks for your reply.

I have done it using

kv {
  field_split => " "
}

Then I get each key="value" entry as a separate field. For things like srcNet and dstNet I used the grok filter as you described. Now it works fine for my use.
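Putting the two pieces together, the final filter might look roughly like this (a sketch of the approach described above, not the exact config): grok captures the positional fields such as the source and destination networks, and kv handles all the quoted key=value pairs:

```
filter {
  grok {
    # positional prefix up to "firewall:"; the remainder goes to [fw_rest]
    match => { "message" => "<%{INT:syslog_id}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{WORD:firebox_tag} %{WORD:firebox_id} %{DATA:firebox_name} %{DATA} firewall: %{GREEDYDATA:fw_rest}" }
  }
  kv {
    # picks up msg_id="...", src_user="...", duration="...", sent_bytes="..." etc.
    source      => "fw_rest"
    field_split => " "
  }
}
```

Because kv simply skips tokens that are not key=value pairs, the same two filters cover both the filtered and unfiltered message variants shown earlier.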

Thanks for helping!
