Field Values from Pattern Directory

Hello! I'm looking to pull field values from a pattern defined in /etc/logstash/conf.d/patterns

Background:
I'm pulling pfsense firewall logs into Kibana via Logstash, and for the most part it works. But it's missing some IPv6 data from the logs; for some reason it just cuts off two of the fields. It seems to stem from this pattern call:

PFSENSE_IP_SPECIFIC_DATA (%{PFSENSE_IPv4_SPECIFIC_DATA}|%{PFSENSE_IPv6_SPECIFIC_DATA})

where further down it defines the IPv4 data and the IPv6 data to parse. Oddly, if I call ONLY the IPv6 data (rather than having it go through the "or" operator), it pulls all the info just fine. Or at least that's what the debugger at http://grokdebug.herokuapp.com/ tells me.

Anyway, I figured I'd check the IP version and branch from there based on whether "4" or "6" comes back. I'm stuck, though, on how to reference a field captured by the pattern. I pull the first part of the log line using:

match => [ "message", "%{PFSENSE_LOG_DATA}" ]

where PFSENSE_LOG_DATA is defined as:

PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),

But if I try to check the value of [PFSENSE_LOG_DATA][ip_ver], things fall apart.

How do I read a value from a provided pattern file?
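Conceptually, what I'm after is something like this (just a sketch; whether the test should read [ip_ver], [PFSENSE_LOG_DATA][ip_ver], or something else entirely is exactly what I can't figure out):

if [ip_ver] == "4" {
  # run the IPv4-specific grok patterns here
} else {
  # run the IPv6-specific grok patterns here
}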

Here's the logstash filter conf:

filter {
  if "pfsense" in [tags] {
    grok {
      match => [ "message", "%{MONTH} %{MONTHDAY} %{TIME} (?<prog>.*?): (?<msg>.*)" ]
    }
    mutate {
      replace => [ "message", "%{msg}" ]
    }
    mutate {
      remove_field => [ "msg", "%{MONTH}", "%{MONTHDAY}", "%{TIME}" ]
    }
    if "filterlog" in [prog] {
      grok {
        add_tag => [ "firewall" ]
        patterns_dir => "/etc/logstash/conf.d/patterns"
        match => [ "message", "%{PFSENSE_LOG_DATA}" ]
      }
      if "4" in [PFSENSE_LOG_DATA][ip_ver] {
        grok {
          patterns_dir => ["/etc/logstash/conf.d/patterns"]
          match => [ "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv4_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
                     "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv4_SPECIFIC_DATA_ECN}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}" ]
        }
      } else {
        grok {
          patterns_dir => ["/etc/logstash/conf.d/patterns"]
          match => [ "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv6_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
                     "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv4_SPECIFIC_DATA_ECN}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}" ]
        }
      }
    }
    mutate {
      lowercase => [ 'proto' ]
    }
  }
}

Thanks in advance!

Hi Scott,

Could you please provide some sample log lines? It would be helpful.

Sure can - though I did notice I didn't bracket out my patterns dir towards the top of the filter conf, and I might have an extra curly brace at the bottom.

But logs, sure. I'm assuming you'd like the lines that are being read in?

IPv6:
Sep 13 00:02:27 pfsense-hostname.xxxx.xxx filterlog: 5,16777216,,1000000003,em1,match,block,in,6,0x00,0x00000,1,UDP,17,104,XXX::XXXXXXXXX:8687:d647,XXXX::1:2,546,547,104
Sep 13 00:02:28 pfsense-hostname.xxxx.xxx filterlog: 5,16777216,,1000000003,em1,match,block,in,6,0x00,0x00000,1,UDP,17,104,XXXX::XXXXXXXXX:8687:d647,XXXX::1:2,546,547,104
Sep 13 00:02:30 pfsense-hostname.xxxx.xxx filterlog: 5,16777216,,1000000003,em1,match,block,in,6,0x00,0x00000,1,UDP,17,104,XXX::XXXXXXXXX:8687:d647,XXXX::1:2,546,547,104

IPv4:
Sep 13 00:06:28 pfsense-hostname.xxxx.xxx filterlog: 9,16777216,,1000000103,em1,match,block,in,4,0x0,,64,5620,0,none,6,tcp,63,XXX.XXX.XXX.XXX,23.92.XXX.XXX,51261,4070,11,PA,2987092353:2987092364,3522064689,4096,,nop;nop;TS
Sep 13 00:10:06 pfsense-hostname.xxxx.xxx filterlog: 9,16777216,,1000000103,em1,match,block,in,4,0x0,,128,31634,0,DF,6,tcp,40,XXX.XXX.XXX.XXX,107.23.XXX.XXX,62875,443,0,FA,24313673,364093152,256,,
Sep 13 00:17:26 pfsense-hostname.xxxx.xxx filterlog: 9,16777216,,1000000103,em1,match,block,in,4,0x0,,64,36450,0,none,6,tcp,896,XXX.XXX.XXX.XXX,193.235.XXX.XXX,56763,443,844,FPA,1689868723:1689869567,2685202277,4096,,nop;nop;TS

Hostnames and IP addresses blocked out. I'm going to double check those brackets and braces but I'm not entirely sure that's the cause.

Hi Scott,

I am not sure why custom patterns are needed in your scenario. Is there a specific need to use custom patterns?

Purely from a log-parsing perspective, I see two options:

  1. Use grok to parse the first three fields (timestamp, hostname, and filterlog). Then use %{GREEDYDATA} to consume the remaining fields, and apply the csv filter to the GREEDYDATA field (a rough sketch of this approach follows the grok example below).

  2. Or you can write a grok for the entire string directly. I have trimmed the log to write a quick grok. Here is my log line and the corresponding grok filter.

Sep 13 00:02:27 pfsense-hostname.xxxx.xxx filterlog: 2001:0db8:85a3:0000:0000:8a2e:0370:7334

I trimmed the extra fields in order to write this filter quickly; however, you can easily write the patterns for the rest. Here is the grok for the log line above.

%{SYSLOGTIMESTAMP} %{USERNAME} %{WORD}\: %{IP}
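And if you go with option 1, a rough sketch (untested, and the filter_msg field name and the column names are just placeholders) could look like this:

filter {
  grok {
    # pull off the timestamp, hostname, and program name; keep the CSV payload in one field
    match => [ "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:syslog_host} %{WORD:prog}\: %{GREEDYDATA:filter_msg}" ]
  }
  csv {
    # split the remaining comma-separated filterlog fields
    source => "filter_msg"
    separator => ","
    # columns => [ "rule", "sub_rule", ... ]   # optionally name the columns you care about
  }
}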

I hope this answers the question. Let me know! :slight_smile:

It's a trial by fire, basically :stuck_out_tongue:

Our organization has just recently made the jump to the ELK stack, and I'm having to learn on the fly by simply doing task X and figuring out what works and what doesn't. I went looking for a way to get our pfsense box's logs into ELK and found a few solutions, most of which included a pattern file with 27 different definitions (some nested inside each other). Some of the config files were over two years old and needed some rewriting (which is fine; I learn better by doing anyway).

The definitions pull in all the data and, by and large, they work, except for this one thing I posted about. If it pulls IPv6 out of the IP_SPECIFIC_DATA call, it cuts off the last two fields, which are the protocol name and ID number, but ONLY if ICMPv6 is listed; UDP comes in and gets collected by Logstash just fine. Then I tried parsing the IPv6 lines in the debugger by calling IPv6_SPECIFIC_DATA directly (instead of through the | operator), and it pulled all the fields from the log text.

And so here we are with the if/else statements on the ip_ver.

I'll give the %{GREEDYDATA} approach a try and see how that goes. I just wasn't sure if I was missing something (my Google-Fu is usually pretty good) or if I was going about it the hard way.
