Logstash grok filter only extracting first match

Hi. Elastic Stack 7.4 in use here. I'm using the grok filter in Logstash to extract data from a connection event from a Cisco Firepower. The syslog data is sent to Filebeat on a separate computer, which processes it with the Filebeat Cisco module. My filter successfully matches the event code and writes it to the document as the field event.code. At present I'm only trying to get one other field, SrcPort, matched and written to the documents as client.port. There are no errors with the Logstash pipeline, but the second match never produces a write to its field. I've refreshed the Index Pattern in Kibana and can see the field listed there.
The Kibana Grok debugger has no issues with either of the match patterns and successfully extracts the required data from the message.
My filter is:
filter {
  grok {
    patterns_dir => [ "E:/ProgramFiles/logstash/config/patterns" ]
    match => {
      "message" => [
        "<[>]",
        "SrcPort:[\s]*(?<client.port>[0-9]{1,5}),"
      ]
    }
  }
}

An example syslog message is:
<118>Feb 6 16:22:14 fw-02-a web: Protocol: TCP, SrcIP: 192.168.0.1, OriginalClientIP: ::, DstIP: 52.184.92.48, SrcPort: 60756, DstPort: 80, TCPFlags: 0x0, IngressZone: Inside, EgressZone: Outside, DE: Primary Detection Engine (8ea39c90-7915-11e8-a1eb-cd0d65f0cc7a), Policy: policyname, ConnectType: End, AccessControlRuleName: inside_internet, AccessControlRuleAction: Allow, Prefilter Policy: DHCP & Terredo, UserName: username, UserAgent: MICROSOFT_DEVICE_METADATA_RETRIEVAL_CLIENT, Client: Web browser, ApplicationProtocol: HTTP, WebApplication: Microsoft, InitiatorPackets: 80, ResponderPackets: 69, InitiatorBytes: 43730, ResponderBytes: 50138, NAPPolicy: Balanced Security and Connectivity, DNSResponseType: No Error, Sinkhole: Unknown, ReferencedHost: dmd.metaservices.microsoft.com, URLCategory: Unknown, URLReputation: Risk unknown, URL: http://dmd.metaservices.microsoft.com/metadata.svc

Any help will be appreciated. Thanks.

Which patterns are you trying to parse, i.e. the patterns referenced from patterns_dir?

Thanks for your response @sai_kiran1. I did look at whether the custom grok patterns were causing the issue, but I'd ruled them out by the time I posted.
I have managed to get this working now. I'm not sure the code above pasted in correctly, as there isn't a proper first match-and-extract line showing. The main issues were:

  • References to child properties needed to be in [parent][child] format rather than my original parent.child format. This was causing an if statement to fail to trigger (see the sketch after this list).

  • Multiple match statements within a single grok block, all referencing the same message field, were flagged as bad syntax. I did try multiple patterns within the { "message" => [ pattern pattern ] } statement, but this also failed. I think I may have to put a comma between each pattern inside the square brackets to get it to work, so I'll give that another try.
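To illustrate the first point, here is a minimal sketch (the mutate tag is just a placeholder to show the conditional firing): a named capture written with dot notation creates a single top-level field whose name contains a literal dot, while bracket notation creates the nested field that conditionals can address.

filter {
  grok {
    # Dot notation: produces one flat field literally named
    # "client.port", not a nested [client][port] object.
    # match => { "message" => [ "SrcPort:[\s]*(?<client.port>[0-9]{1,5})," ] }

    # Bracket notation: produces the nested field [client][port].
    match => { "message" => [ "SrcPort:[\s]*(?<[client][port]>[0-9]{1,5})," ] }
  }

  # Only fires when the nested bracket-notation field exists;
  # "if [client.port]" would not see it.
  if [client][port] {
    mutate { add_tag => [ "has_client_port" ] }
  }
}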

My current solution is:

filter {

  grok {
    patterns_dir => [ "E:/ProgramFiles/logstash/config/patterns" ]
    match => { "message" => [ "<[>]" ] }
  }

  if "<118>" in [message] {
    grok {
      patterns_dir => [ "E:/ProgramFiles/logstash/config/patterns" ]
      match => { "message" => [ "%{CISCOTIMESTAMP:[event][created]}" ] }
    }
    grok {
      patterns_dir => [ "E:/ProgramFiles/logstash/config/patterns" ]
      match => { "message" => [ "(?<[observer][hostname]>fw-ext0[1-2]-[A-Ba-b]) web:" ] }
    }
  }
}

It's a bit of a bulky solution code-wise, but it's working.

Off to battle a similar issue with mutate on our other system.

Grok allows an array of patterns to be matched for a single field.

You can do it like this:
grok {
  match => { "message" => [ "grok_pattern_1", "grok_pattern_2", "grok_pattern_3" ] }
}
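One thing worth checking against the grok filter docs for your version: break_on_match defaults to true, so with an array of patterns grok stops at the first pattern that matches and never tries the rest. If you want every pattern applied to the same message, set it to false:

grok {
  match => { "message" => [ "grok_pattern_1", "grok_pattern_2", "grok_pattern_3" ] }
  # Default is true, which stops after the first successful pattern;
  # false makes grok attempt every pattern in the array.
  break_on_match => false
}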

I gave that a try before because it looks so much tidier, but it didn't work. I thought I'd try it again in case something else had been the issue at the time.
The same working code with multiple grok statements was edited down to a single grok statement. There are no issues loading the conf, and the documents are written to Elasticsearch, but grok fails to extract the pattern matches into new fields. The first grok and drop statements work properly.
Maybe it's an issue on Windows. I've had a lot of problems getting Elastic Stack up and running, and I'm wondering if it's a Windows vs Linux thing, as Elastic's products look to be written primarily for Linux. No proof of this so far, but I will build a Linux stack to test.

Filter that doesn't extract:

filter {
  grok {
    patterns_dir => [ "E:/ProgramFiles/logstash/config/patterns" ]
    match => { "message" => [ "<[>]" ] }
  }
  if "<166>" in [message] {
    if "%ASA-6-302010:" in [message] or "%ASA-6-305011:" in [message] or "%ASA-6-305012:" in [message] or "%ASA-6-607001:" in [message] {
      drop { }
    }
  }
  if "<118>" in [message] {
    grok {
      patterns_dir => [ "E:/ProgramFiles/logstash/config/patterns" ]
      match => {
        "message" => [
          "%{CISCOTIMESTAMP:[event][created]}",
          "(?<[observer][hostname]>fw-ext0[1-2]-[A-Ba-b]) web:",
          "Protocol:[\s](?<[network][transport]>[\w]{3}),",
          "SrcIP:[\s]*%{IPV4:[client][address]},",
          "DstIP:[\s]%{IPV4:[server][address]},",
          "SrcPort:[\s]*(?<[client][port]>[0-9]{1,5}),",
          "DstPort:[\s](?<[server][port]>[0-9]{1,5}),",
          "IngressZone:[\s]*%{WORD:[zone][ingress]},",
          "EgressZone:[\s]*%{WORD:[zone][egress]},",
          "Policy: %{FTD_POLICY:[rule][ruleset]},",
          "ConnectType: %{WORD:[ftd][connect_type]},",
          "AccessControlRuleName: %{WORD:[rule][name]},",
          "AccessControlRuleAction: %{WORD:[rule][action]},",
          "Prefilter Policy: %{PREFILTER_POLICY:[ftd][prefilter_policy]},",
          "Client: %{FTD_CLIENT:[ftd][client]},",
          "ApplicationProtocol: %{FTD_APP_PROTOCOL:[client][application_protocol]},",
          "WebApplication: %{FTD_WEB_APP:[client][application]},",
          "NAPPolicy: %{FTD_NAP_POLICY:[ftd][nap_policy]},",
          "DNSResponseType: %{FTD_DNS_RESPONSE:[dns][response_code]},",
          "Sinkhole: %{WORD:[dns][sinkhole]},",
          "URLCategory: %{WORD:[url][category]},",
          "URLReputation: %{FTD_URL_REPUTATION:[url][reputation]},",
          "URL: %{URI:[url][path]}"
        ]
      }
    }
  }
}
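If the documents reach Elasticsearch but the fields are missing, it's worth checking whether the events carry the _grokparsefailure tag, which grok adds when no pattern matches at all. A minimal debugging sketch (this stdout output is an addition for testing, not part of the pipeline above):

output {
  # Dump each event to the console so you can see exactly which
  # fields grok extracted and whether _grokparsefailure was tagged.
  stdout { codec => rubydebug }
}

If the tag is absent but the fields still don't appear, that points at grok stopping after the first successful pattern (the break_on_match behaviour mentioned above) rather than failing outright.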
