Grok pattern works in Grok Debugger but not when running in Logstash with MSSQL

I am attempting to read Microsoft SQL Server error logs in via Filebeat -> Logstash. The pattern below, tested against the sample log lines below, works in the Grok Debugger but fails when running in production.

%{TIMESTAMP_ISO8601:time} %{WORD:event_category}\s+%{WORD:event}:? %{GREEDYDATA:rest_of_message}

2019-04-21 00:15:10.28 Logon       Error: 18456, Severity: 14, State: 38.
2019-04-21 00:15:08.24 Logon       Login succeeded for user 'domain\realuser'. Connection made using Windows authentication. [CLIENT: 192.168.10.10]
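As a sanity check outside both tools, the grok pattern can be hand-expanded into a plain regex (these are simplified versions of the stock TIMESTAMP_ISO8601 and WORD definitions; Python is used here purely for illustration), and it does capture the sample lines:

```python
import re

# Rough hand-expansion of the grok pattern:
#   %{TIMESTAMP_ISO8601:time} %{WORD:event_category}\s+%{WORD:event}:? %{GREEDYDATA:rest_of_message}
pattern = re.compile(
    r"(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<event_category>\w+)\s+(?P<event>\w+):? (?P<rest_of_message>.*)"
)

line = "2019-04-21 00:15:10.28 Logon       Error: 18456, Severity: 14, State: 38."
m = pattern.match(line)
print(m.group("event_category"), m.group("event"))  # Logon Error
```

So the pattern itself is sound against these samples, which suggests the failure is in how the lines reach Logstash, not in the pattern.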

In the full Logstash config I have the snippet below. I am getting the "pre header" tag but not the "parsed header" tag, so I know this is where the issue is. The only pattern I can get the logs to match in Logstash is %{GREEDYDATA}.

if "_grokparsefailure" not in [tags] {
    mutate {
        add_tag => "pre header"
    }
}
grok {
    match => [
        "message", "%{TIMESTAMP_ISO8601:time} %{WORD:event_category}\s+%{WORD:event}:? %{GREEDYDATA:rest_of_message}"
    ]
}
if "_grokparsefailure" not in [tags] {
    mutate {
        add_tag => "parsed header"
    }
}

Could grok parsing fail due to encoding? I am fairly sure these logs are UTF-16 encoded. I'm just not sure what to try at this point.
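To illustrate the encoding theory, here is a quick experiment (a Python sketch, outside Logstash) simulating what happens when UTF-16LE bytes are read as if they were single-byte text, which is effectively what a wrong Filebeat encoding setting produces:

```python
import re

line = "2019-04-21 00:15:10.28 Logon       Error: 18456, Severity: 14, State: 38."

# Simulate reading a UTF-16LE file as single-byte text: every ASCII
# character picks up a trailing NUL byte, e.g. "2019" becomes "2\x000\x001\x009\x00".
garbled = line.encode("utf-16le").decode("latin-1")

# Simplified stand-in for the start of the grok pattern (TIMESTAMP_ISO8601 + WORD):
pattern = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\s+\w+")

print(bool(pattern.match(line)))     # True
print(bool(pattern.match(garbled)))  # False: NUL bytes sit between every character
```

If the log file really is UTF-16, every character of the message would be interleaved with NUL bytes by the time grok sees it, and nothing short of %{GREEDYDATA} would match.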

Any help is appreciated.

Figured it out! The MSSQL error log is encoded in UTF-16LE. I hadn't set

encoding: utf-16le

on the Filebeat input, which is why it wasn't matching.
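For anyone hitting the same thing, the relevant part of filebeat.yml ends up looking roughly like this (the path shown is just an example for a default SQL Server install; adjust for your instance):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - "C:/Program Files/Microsoft SQL Server/MSSQL14.MSSQLSERVER/MSSQL/Log/ERRORLOG*"
    # MSSQL writes its error log in UTF-16LE; without this, grok sees
    # NUL-interleaved bytes and the pattern never matches.
    encoding: utf-16le
```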
