Grok parse failure

Hi experts,

I have a new problem, and this time I don't think I can resolve it myself.
I'm trying to parse a small message like this:

2020-02-12 13:52:04.15 spid4s      Execution of SSIS_HOTFIX_INSTALL.SQL completed

These messages come from the SQL Server error log.

I tried my grok pattern on an online tester, and it actually matches my message.
This is my filter configuration:

filter {
  if "beats_input_codec_plain_applied" in [tags] {
    mutate {
      remove_tag => ["beats_input_codec_plain_applied"]
    }
  }
  if "ERRORLOG" in [tags] {
    grok {
      match => { "message" => ["%{DATESTAMP:date} %{WORD:service} %{GREEDYDATA:message}"] }
    }
  }
  if "ERRORLOG" in [tags] {
    mutate {
      remove_tag => ["message"]
    }
  }
}

I tried several syntaxes for the match option, like
["message", "%{PATTERN} "]
{"message" => "%{PATTERN}"}

I think I have a syntax error in the filter configuration; in the end it doesn't work and I get a _grokparsefailure.

Any ideas to help me?

Regards.
Jonathan

That is not a DATESTAMP. A DATESTAMP starts with a DATE, which would be 02/19/2020, or 19/02/2020, or various other things, but it cannot start with a year. You appear to have a TIMESTAMP_ISO8601 there.
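
For reference, the relevant definitions in the standard grok-patterns file look roughly like this (from memory, so double-check against your installed patterns):

DATE_US            %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU            %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
DATE               %{DATE_US}|%{DATE_EU}
DATESTAMP          %{DATE}[- ]%{TIME}
TIMESTAMP_ISO8601  %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?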

With %{TIMESTAMP_ISO8601:date} %{WORD:service} %{GREEDYDATA:message} I still have the same problem, a _grokparsefailure. Is there a specific syntax for the filter?

Are you sure there is a space after %{WORD:service} and not a tab?

Hi @Badger

Thanks a lot for your time.

No, I'm not sure about the tab :frowning: and I found another problem: the logs on MSSQL are encoded as UCS-2 LE BOM, and that is not an encoding option in Logstash.
With a rubydebug output, I get this message:

"message" => "\u0000\t\u0000S\u0000t\u0000a\u0000n\u0000d\u0000a\u0000r\u0000d\u0000 \u0000E\u0000d\u0000i\u0000t\u0000i\u0000o\u0000n\u0000 \u0000(\u00006\u00004\u0000-\u0000b\u0000i\u0000t\u0000)\u0000 \u0000o\u0000n\u0000 \u0000W\u0000i\u0000n\u0000d\u0000o\u0000w\u0000s\u0000 \u0000S\u0000e\u0000r\u0000v\u0000e\u0000r\u0000 \u00002\u00000\u00001\u00006\u0000 \u0000D\u0000a\u0000t\u0000a\u0000c\u0000e\u0000n\u0000t\u0000e\u0000r\u0000 \u00001\u00000\u0000.\u00000\u0000 \u0000<\u0000X\u00006\u00004\u0000>\u0000 \u0000(\u0000B\u0000u\u0000i\u0000l\u0000d\u0000 \u00001\u00004\u00003\u00009\u00003\u0000:\u0000 \u0000)\u0000 \u0000(\u0000H\u0000y\u0000p\u0000e\u0000r\u0000v\u0000i\u0000s\u0000o\u0000r\u0000)\u0000\r\u0000"

So I think my first problem is the encoding, but if I have a tab between WORD and GREEDYDATA, is there a Logstash pattern to ignore it?

%{SPACE} will match any whitespace, including tab.
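
Something along these lines ought to work once the encoding is sorted out (untested; the overwrite option is there so the captured text replaces the original message field instead of being appended to it):

grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:date}%{SPACE}%{WORD:service}%{SPACE}%{GREEDYDATA:message}" }
  overwrite => ["message"]
}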

Thanks a lot, I will wait for the SQL administrator to change the encoding, and I will add %{SPACE}.
I hope it will be a success.
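
(Side note, in case the encoding cannot be changed on the SQL Server side: if the file were read directly by Logstash's file input rather than shipped by Beats, the charset could be declared on the plain codec. A rough, untested sketch with a placeholder path, treating UCS-2 LE as UTF-16LE, which is an assumption. Filebeat's log input also has an encoding setting, if I remember correctly.)

input {
  file {
    path => "C:/path/to/ERRORLOG"
    codec => plain { charset => "UTF-16LE" }
  }
}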

Hello @Badger
After changing the encoding and adding %{SPACE} for the ERRORLOG file, it's working well. Thanks a lot.

But I have another problem with another file; it's again a _grokparsefailure, and in the rubydebug output I can see why:

"message" => "2020-02-12 13:52:12.593\tMSSQLFDLauncher service received control message."

There is a \t in the JSON message, and I think that is what breaks the grok parse. I tried many patterns, like

%{EXIM_DATE:date} [,]\t+-?%{GREEDYDATA:message}
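
Following the %{SPACE} hint above, I suppose something like this might work for that file too (not tried yet, the field names are just my guesses):

grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:date}%{SPACE}%{GREEDYDATA:message}" }
  overwrite => ["message"]
}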

Do you have any idea?

Thanks a lot again for your time.

Regards.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.