_grokparsefailure on core patterns

Hello!
I'm getting weird grok failures after adding my own patterns. I have checked my patterns, and they shouldn't even come close to matching the standard ones in the core patterns.
This is for Cisco ASA firewalls, and the following is the failing message:
Aug 17 15:46:55 firewall2 : %ASA-6-302020: Built outbound ICMP connection for faddr 1.2.3.4/0(domain\test) gaddr 10.20.30.40/0 laddr 10.20.30.40/0
Cisco_messages = Built outbound ICMP connection for faddr 1.2.3.4/0(domain\test) gaddr 10.20.30.40/0 laddr 10.20.30.40/0

What I can see is that it goes through my filter, adding tags and so on. But then it fails when handling the cisco_message.
Here is an example of my filter config:
filter {
  if [type] == "cisco-asa" {
    grok {
      match => ["message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host}( :)? %%{CISCOTAG:cisco_tag}: %{GREEDYDATA:cisco_message}"]
      add_tag => "ASA_log"
    }
    grok {
      patterns_dir => "/etc/logstash/patterns/"
      match => [
        "cisco_message", "%{CISCOASA715047}",
        "cisco_message", "%{CISCOASA713236}",
        "cisco_message", "%{CISCOASA711004}",
        "cisco_message", "%{CISCOASA710002}",
        "cisco_message", "%{CISCOASA609002_1}",
        "cisco_message", "%{CISCOASA710005}",
        "cisco_message", "%{CISCOASA434004}",
        "cisco_message", "%{CISCOASA607001}"
      ]
    }
  }
}
Anyone got some ideas?
I have tried to add the match in my filter file, but it didn't help.

What are all the underscores doing in your configuration? Please always format configuration snippets as code with the </> button.

What does a processed message look like? Please show the result of a stdout { codec => rubydebug } output.
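For reference, a minimal output section for this kind of debugging might look like the following sketch (replacing or supplementing whatever output is currently configured):

```
output {
  stdout { codec => rubydebug }
}
```

This prints every processed event to the console with all fields and tags expanded, which makes it easy to see exactly what the grok filters produced.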

Alright, I will remember that for next time. Editing the post didn't go that well.

I've got quite a lot of messages going through, so I hope it is enough to use the JSON from Kibana instead?
{
  "_index": "filebeat-2016.08.18",
  "_type": "cisco-asa",
  "_id": "AVacIQl30ixb02L-V0w6",
  "_score": null,
  "_source": {
    "message": "Aug 18 07:30:22 firewall2 : %ASA-6-302020: Built inbound ICMP connection for faddr 1.2.3.4/1 gaddr 192.168.1.2/0 laddr 192.168.1.2/0(DOMAIN\\User)",
    "@version": "1",
    "@timestamp": "2016-08-18T05:30:22.000Z",
    "source": "/opt/company/log/firewall2_2016-08-18.log",
    "offset": 800765591,
    "type": "cisco-asa",
    "fields": null,
    "beat": {
      "hostname": "syslog.company.com",
      "name": "syslog.company.com"
    },
    "input_type": "log",
    "count": 1,
    "host": ["syslog.company.com", "firewall2"],
    "tags": ["beats_input_codec_plain_applied", "ASA_log", "_grokparsefailure"],
    "timestamp": "Aug 18 07:30:22",
    "cisco_tag": "ASA-6-302020",
    "cisco_message": "Built inbound ICMP connection for faddr 1.2.3.4/1 gaddr 192.168.1.2/0 laddr 192.168.1.2/0(DOMAIN\\User)",
    "syslog_severity_code": 5,
    "syslog_facility_code": 1,
    "syslog_facility": "user-level",
    "syslog_severity": "notice"
  },
  "fields": {
    "@timestamp": [1471498222000]
  },
  "highlight": {
    "cisco_tag": ["ASA-@kibana-highlighted-field@6@/kibana-highlighted-field@-302020"],
    "message": ["Aug 18 07:30:22 firewall2 : %ASA-@kibana-highlighted-field@6@/kibana-highlighted-field@-302020: Built inbound ICMP connection for faddr 1.2.3.4/1 gaddr 192.168.1.2/0 laddr 192.168.1.2/0(DOMAIN\\User)"],
    "tags": ["@kibana-highlighted-field@_grokparsefailure@/kibana-highlighted-field@"]
  },
  "sort": [1471498222000]
}

Okay, so cisco_message seems to have the expected value. I suggest you make a simple configuration with a stdin input, a stdout output, and a grok filter that matches message against the CISCOASA pattern you expect it to match. Then, peel the onion by replacing the grok pattern name with its definition, so something like this:

grok {
  match => ["message", "%{CISCO_DIRECTION:direction} %{WORD:protocol} connection %{CISCO_ACTION:action} from %{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port} flags %{GREEDYDATA:tcp_flags} on interface %{GREEDYDATA:interface}"]
}

Then, step by step, remove tokens from the end of the expression and run Logstash, feeding it the input string via stdin. Remove tokens until what's left of the expression matches. Then you've found the culprit.
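The suggested test setup could be sketched as a standalone config like the one below. The pattern name here is an assumption on my part: for an ASA-6-302020 message, the stock CISCOFW302020_302021 pattern would be the natural candidate, but substitute whichever pattern you expect to match.

```
input { stdin {} }

filter {
  grok {
    # Assumed candidate pattern for ASA-6-302020 messages;
    # replace with the stock pattern you expect to match.
    match => ["message", "%{CISCOFW302020_302021}"]
  }
}

output { stdout { codec => rubydebug } }
```

Paste the cisco_message string on stdin and check whether the event comes out with the expected fields or with a _grokparsefailure tag.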


Well, the pattern should already be in the core patterns, from what I can see.
From "https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/firewalls":
# ASA-6-302020, ASA-6-302021
CISCOFW302020_302021 %{CISCO_ACTION:action}(?: %{CISCO_DIRECTION:direction})? %{WORD:protocol} connection for faddr %{IP:dst_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:src_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:src_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?
And when I try with "https://grokdebug.herokuapp.com" it passes through.
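One thing worth checking (this is an assumption based on the filter config quoted earlier in the thread, not something confirmed there): the second grok filter's match list contains only the custom CISCOASA patterns, so a stock pattern such as CISCOFW302020_302021 is never tried against cisco_message, and any 302020 event would be tagged _grokparsefailure. A sketch of adding the stock pattern to the list:

```
grok {
  patterns_dir => "/etc/logstash/patterns/"
  match => [
    "cisco_message", "%{CISCOFW302020_302021}",
    "cisco_message", "%{CISCOASA715047}"
    # ... the remaining custom patterns as before ...
  ]
}
```

By default grok stops at the first pattern that matches (break_on_match), so the order of the list only affects which match wins, not whether matching is attempted.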

So my problem isn't how I should make a pattern, but rather how it can happen that the standard ones aren't getting matched.

Yes, I understand what the problem is. I still think it's a good idea to make sure that your input actually matches one of the available patterns. If it does, great; then you can add more and more of your current configuration and slowly converge on what you currently have that doesn't work.

@magnusbaeck I'm not sure how your suggestion is supposed to work. The match you provided is not for this log entry.
You want me to try and match it against my own CISCOASA patterns?

Also, with "remove from the end of the expression", do you mean the feed string or the match string?
If I take away match tokens, I get _grokparsefailure every time.

The match you provided is not for this log entry.

It was just an example.

You want me to try and match it against my own CISCOASA patterns?

No, the stock ones that you want to match against.

Also with "remove from the end of the expression" do you mean the feed string or the match string?

The expression you're matching against. But you could do it the other way around; start with the simplest possible expression, like

%{CISCO_DIRECTION:direction} 

and make sure that works. Then add the next token,

%{CISCO_DIRECTION:direction} %{WORD:protocol}

and so on until you either reach the end of the expression or you get _grokparsefailure. In the latter case you've found the culprit in the token that you added most recently.

Now I follow. I have done most of that and it checks out. I will go through my own patterns again,
and triple-check my configuration.

I have reverted my files to the point where it worked and can now say it was a typo that caused this. Thanks for the help @magnusbaeck. It is working as I want it to.
I consider this matter closed.