] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"=>\" at line 10, column 84 (byte 218) after filter {\r\n grok {\r\n match => { \"message\" => \"logver=%{NUMBER:log_version}\" \"idseq=%{NUMBER:idseq}\" ", :backtrace=>[
Can someone please look and let me know what I am missing from the GROK pattern?
Note: I have changed the pattern a few times and each time the error is the same.
The configuration compiler is consuming "logver=%{NUMBER:log_version}" as the pattern that it needs to match against [message]. It then takes "idseq=%{NUMBER:idseq}" as the next field name and expects a => to separate it from "itime={NUMBER:itime}".
You could put single quotes around the entire pattern
match => { "message" => '"logver=%{NUMBER:log_version}" ... "rcvddelta=%{DATA:rcvddelta}"' }
or, more likely, remove all of the double quotes, since they do not appear in the log line you are trying to parse.
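With the embedded double quotes stripped out, the same match would look something like this (only a sketch built from the fields visible in the error message; the rest of your fields would follow the same shape):

match => { "message" => "logver=%{NUMBER:log_version} ... rcvddelta=%{DATA:rcvddelta}" }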
You can probably replace a lot of those DATA patterns with NOTSPACE, which will fail far faster if the log line does not match. To see what I mean, try deleting logver= from a log line and then sending it through that grok filter.
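For instance, a field like rcvddelta could be matched with NOTSPACE instead of DATA (just a sketch; NOTSPACE matches a run of non-space characters, so a line that does not match is rejected far sooner instead of backtracking through all of the DATA patterns):

rcvddelta=%{NOTSPACE:rcvddelta}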
You might also want to consider using a kv filter instead of grok.
The documentation is here. The default separator between a key and its value is =, and the default separator between key=value pairs is a space, which means you can use the defaults.
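Something like this minimal sketch, with every option left at its default:

kv { }

would create fields such as [logver], [idseq], [itime],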
etc. It even silently strips the quotes around field values if they are present. If some of your fields contain spaces it can get a bit more complicated, but cross that bridge when you come to it.
BTW, if you are parsing Fortinet logs in the Elastic Stack then starting with grok in Logstash may well not be the best way to do it.
Right, you will be getting a _grokparsefailure. Your pattern is very, very complicated, and every field has to match. The way to debug that is to start with a pattern that matches just the first field, something like
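grok { match => { "message" => "logver=%{NUMBER:log_version}" } }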
if that works then add the next field. At that point you will hopefully realize that when you add itime={NUMBER:itime} it stops working, which happens because you are missing a %.
As you keep adding fields you will discover problems like values that are wrapped in double quotes in the log line, for example (a hypothetical field name, just to illustrate, not taken from your log):
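devname="FGT-branch-1"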
which means you need to add double quotes around those fields in the pattern (using a kv filter fixes this for you by default).
Keep adding and it will blow up at srcip, because you damaged the log line when redacting the IP address. You may also find that having the date and time values split across six fields instead of two is not useful, so you need to define custom patterns to consume them.
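For example, a custom pattern could consume the whole date as one field (just a sketch, assuming the log carries date= and time= values in YYYY-MM-DD and HH:MM:SS format; FORTIDATE is a made-up pattern name):

grok {
    pattern_definitions => { "FORTIDATE" => "%{YEAR}-%{MONTHNUM}-%{MONTHDAY}" }
    match => { "message" => "date=%{FORTIDATE:date} time=%{TIME:time}" }
}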