Grok pattern for this specific log

I want to filter through this log:
[ ERROR] 02.04.2016. 20:38:19 (FileManagerServlet:handleDownload) Date and time: Sat Apr 02 20:38:19 CEST 2016| miliseconds: 1459622299268| + session id: D4190DFF52C536C500FAF0947DB120DC| userId: 145962184466057
and I'm using the following pattern, which works fine in the Grok Debugger:
[ %{LOGLEVEL:log_level}] %{DATE_EU:date}. %{TIME:time} (%{GREEDYDATA:class}:%{GREEDYDATA:operation}) Date and time: %{GREEDYDATA:full_date_and_time}| miliseconds: %{BASE10NUM:milis}| + session id: %{GREEDYDATA:session_id}| userId: %{BASE10NUM:user_id}
However, the output in the terminal where I started Logstash was the following:
{
"@version" => "1",
"file_name" => "cris_download_log",
"@timestamp" => 2022-02-01T22:05:15.607Z,
"host" => "synnslt6s40663-l",
"path" => "/home/mihailo/Desktop/CRIS_UNS/cris_download_log.log",
"message" => "[ ERROR] 02.04.2016. 20:38:19 (FileManagerServlet:handleDownload) Date and time: Sat Apr 02 20:38:19 CEST 2016| miliseconds: 1459622299268| + session id: D4190DFF52C536C500FAF0947DB120DC| userId: 145962184466057"
}

I want the grok filter to extract the fields and index them in the document, something like this:
{
"@version" => "1",
"file_name" => "cris_download_log",
"@timestamp" => 2022-02-01T22:05:15.607Z,
"host" => "synnslt6s40663-l",
"path" => "/home/mihailo/Desktop/CRIS_UNS/cris_download_log.log",
"logLevel" => "ERROR",
"date" => "02.04.2016",
"time" => "20:38:19",
"class" => "FileManagerServlet",
"operation" => "handleDownload",
"full_date_and_time" => "Sat Apr 02 20:38:19 CEST 2016",
"milis" => 1459622299268,
"session id" => "D4190DFF52C536C500FAF0947DB120DC",
"userId" => 145962184466057
}

My whole Logstash config file looks like this:

Any help would be very welcome!

Grok is based on regular expressions, and any regular expression can be used in a pattern. As a result, you have to escape characters that have a special meaning in regular expressions when they appear literally in the log.

For example,
[ ERROR]
should match
\[ %{LOGLEVEL:log_level}\]
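Applied to your full log line, an escaped version of the pattern could look like the sketch below (untested; note that `.`, `(`, `)`, `|` and `+` are regex metacharacters as well, and I've swapped `GREEDYDATA` for `WORD` where a single token is expected, which is an assumption about your log format):

```
filter {
  grok {
    match => {
      "message" => "\[ %{LOGLEVEL:log_level}\] %{DATE_EU:date}\. %{TIME:time} \(%{WORD:class}:%{WORD:operation}\) Date and time: %{GREEDYDATA:full_date_and_time}\| miliseconds: %{BASE10NUM:milis}\| \+ session id: %{WORD:session_id}\| userId: %{BASE10NUM:user_id}"
    }
  }
}
```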

I recommend not constructing the whole pattern at once, but adding it block by block, checking that each block works.
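For example, start by matching only the beginning of the line and capturing the rest raw, then extend the pattern one piece at a time once each step matches (a sketch, using your field names):

```
# Step 1: match only the log level, capture the rest raw
\[ %{LOGLEVEL:log_level}\] %{GREEDYDATA:rest}

# Step 2: add the date and time
\[ %{LOGLEVEL:log_level}\] %{DATE_EU:date}\. %{TIME:time} %{GREEDYDATA:rest}

# Step 3: add the class and operation, and so on
\[ %{LOGLEVEL:log_level}\] %{DATE_EU:date}\. %{TIME:time} \(%{WORD:class}:%{WORD:operation}\) %{GREEDYDATA:rest}
```

If a step stops matching, you know the problem is in the piece you just added.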

Kibana Grok Debugger is a useful tool.

And next time, please use the </> button to preformat scripts. Thanks.

The problem is I have logs that have different formats, some fields are the same in some formats however I have to handle those different formats. So the plan was to match the whole format for every type of log. Do you have some other ideas? What do you mean block by block? Could you add some grok configuration snippet as an example?

I meant a single grok pattern: %{SYNTAX:SEMANTIC}.

If there are several different formats, I would use the Grok filter twice. The first grok pattern only picks up the text that indicates which format the line uses. Then I apply a second grok pattern chosen according to the format identified by the first grok filter.
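A sketch of that two-stage approach (the conditional values `handleDownload`/`handleUpload` are taken from your example as illustration; the `... full pattern ...` placeholders stand in for the complete per-format patterns):

```
filter {
  # First grok: only extract enough to identify the format
  grok {
    match => { "message" => "\[ %{LOGLEVEL:log_level}\] .* \(%{WORD:class}:%{WORD:operation}\)" }
  }

  # Second grok: pick the full pattern based on the identifier
  if [operation] == "handleDownload" {
    grok {
      match => { "message" => "... full pattern for download logs ..." }
    }
  } else if [operation] == "handleUpload" {
    grok {
      match => { "message" => "... full pattern for upload logs ..." }
    }
  }
}
```

Alternatively, the grok filter accepts a list of patterns for one field and tries them in order until one matches, which can be simpler when the formats are few.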

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.