I haven't used the grok filter before and am having a little trouble getting the results I need. Ultimately I'm pulling some data into Elasticsearch from a database query using Logstash and the JDBC input plugin. That part is mostly irrelevant; what I'm trying to accomplish is to 'normalise' a version field for more meaningful reporting in Kibana.
For example, a product version could be any of the values on the left below, and I want to normalise it to just the major and minor version as shown on the right, storing the result in a different field (such as version_normalised).
Examples:
version -> version_normalised
"10" -> "10.0"
"10.2" -> "10.2"
"10.2(1)" -> "10.2"
"10.2(1)U1" -> "10.2"
"10.2.1.10032" -> "10.2"
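For what it's worth, this untested sketch is roughly what I imagine the end result might look like, though I can't get even the basics working yet (I'm assuming here that grok can match against the version field directly, and version_normalised is just the field name I'd like to use):

```
filter {
  # capture "major.minor" if present, otherwise just the major version
  grok {
    match => { "version" => "^(?<version_normalised>\d+\.\d+|\d+)" }
  }
  # if only a major version was captured, append ".0" (e.g. "10" -> "10.0")
  if [version_normalised] !~ /\./ {
    mutate {
      replace => { "version_normalised" => "%{version_normalised}.0" }
    }
  }
}
```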
As you can see, versions are expressed in different ways in the incoming data, which splits up the reporting, so I want to try and 'normalise' them. As best as I can tell, this may be best accomplished in grok. I have been doing some testing with some very simple configs to break the problem up a bit, but I'm stuck even just matching multiple numbers.
For example, the following seems to work okay: when I test with a single number such as "9" or "10", it sets MAJOR as expected:
input {
  stdin { }
}
filter {
  grok {
    match => { "message" => "(?<MAJOR>^\d+)" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
But if I try multiple patterns, to match "11.22" for example, I just get a parse failure no matter what the input is:
input {
  stdin { }
}
filter {
  grok {
    match => { "message" => [
      "(?<MAJOR>^\D+)",
      "(?<MAJOR>^\D+)\.(?<MINOR>\D+)" ] }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
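One thing I did notice in the docs is that grok tries the patterns in the array in order and, by default (break_on_match), stops at the first one that matches, so I suspect the more specific pattern needs to come first. On re-reading I also wonder whether \D (which I understand matches non-digits) should have been \d. So something like the following untested sketch is my next attempt, though I'm not sure it's the right approach:

```
filter {
  grok {
    # more specific pattern first, since grok stops at the first match by default
    match => { "message" => [
      "^(?<MAJOR>\d+)\.(?<MINOR>\d+)",
      "^(?<MAJOR>\d+)" ] }
  }
}
```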
I have tried multiple other variations as well, but I didn't want to complicate this initial post too much.
Apologies, I'm new to this filter, so I may have follow-up questions too, but I can't seem to get past basic matching at this stage. I'd appreciate any guidance, thank you.