I am currently running into some parsing issues within LogStash. Here is my conf file.
filter {
  if [type] == "srr" {
    kv {
      trim => "\{\}\"\,"
      trimkey => "\{\}\"\,"
      source => "message"
      field_split => "\\n"
      value_split => ":"
    }
    kv {
      source => "Message"
      field_split => "\\n"
      value_split => ":"
    }
    ### This grok matches the date format "Month Day HH:MM:SS" with the YYYY just after it, followed by a number in the next entry.
    ## Be sure to include separation characters like a space or "\r" when obtaining data from another JSON entry.
    grok {
      match => { "Message" => "%{SYSLOGTIMESTAMP} %{NUMBER:Year}\\n%{NUMBER:EventID}" }
    }
    #mutate {
    #  add_field => { "EventID" => "$1" }
    #}
    if "Failed to join" not in [message] {
      drop {}
    }
  }
}
The reason I have two kv instances is that, for some reason, the first one takes the [message] field and parses the data, but puts the parsed data into the [Message] field (capital M). I'm not really sure why that happens, because it doesn't happen with my other kv uses. Very strange, but not unmanageable, because I just add another kv that points at the other field.
The grok is only there for the end goal of matching the Windows Event ID, which doesn't have a field associated with it, and that works fine.
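To be concrete about what I intend that grok to capture, here is a rough Python sketch of the same match against the sample [Message] value quoted further down. The SYSLOGTIMESTAMP pattern is only approximated, and I'm assuming the "\n" separators in the data are a literal backslash followed by "n":

import re

# Rough equivalent of the grok pattern above (SYSLOGTIMESTAMP approximated),
# assuming "\n" in the data is a literal backslash plus "n", not a real newline.
pattern = re.compile(
    r"[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2} (?P<Year>\d+)\\n(?P<EventID>\d+)"
)

sample = r"Jun 11 09:58:23 2018\n666\nSecurity\n"
m = pattern.search(sample)
print(m.groupdict() if m else "no match")  # {'Year': '2018', 'EventID': '666'}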
What is super strange though is this data...
This is a sample of the parsed data in the [Message] field:
"Message": "Jun 11 09:58:23 2018\\n666\\nSecurity\\n
so everything is separated by "\n", which is what I have in the second kv. This works fine up until a certain spot in the log.
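Here is a quick Python sketch of the split I expect field_split to do on that sample, assuming the "\n" in the data is a literal backslash followed by "n" (two characters) rather than a real newline:

# Expected field split on the sample [Message] value shown above.
message = r"Jun 11 09:58:23 2018\n666\nSecurity\n"
print(message.split(r"\n"))  # ['Jun 11 09:58:23 2018', '666', 'Security', '']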
Here, it's fine:
and
What I don't understand is why it's broken here...
when that data should be
\\nTarget Account Name: College Station Test\\n
so it should be shown as "[Target Account Name] : College Station Test"
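Sketched standalone in Python (not the actual kv internals), this is what I expect value_split => ":" to produce for that fragment:

# Expected key/value split on the first ":" of the fragment.
fragment = "Target Account Name: College Station Test"
key, _, value = fragment.partition(":")
print(f"[{key}] : {value.strip()}")  # [Target Account Name] : College Station Test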
I think it has something to do with the kv and the lowercase "n"s. Any advice would be welcome.