I am trying to parse ADFS authentication logs from a winlogbeat data stream. Winlogbeat pre-parses these logs by putting everything it doesn't recognize into fields named event_data.param1, event_data.param2, etc., which are not meaningful names. I'm using a grok filter in Logstash to create meaningful fields in my final index. My ADFS documents have the following event_data.param3 field, as pre-parsed by winlogbeat:
event_data.param3: "jsmith@example.com-The referenced account is currently locked out and may not be logged on to"
My grok filter configuration, which matches all of these messages in the Grok Debugger, looks like this:
grok {
    match => {
        [event_data][param3] => [
            "^%{GREEDYDATA:user_name}-%{GREEDYDATA:reason}"
        ]
    }
}
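As a sanity check that the pattern itself is fine, here is a quick Python approximation of it (assuming GREEDYDATA compiles to `.*`; note the first GREEDYDATA is greedy, so the split lands on the last hyphen in the string):

```python
import re

# Rough regex equivalent of "^%{GREEDYDATA:user_name}-%{GREEDYDATA:reason}":
# GREEDYDATA is .*, and the first .* is greedy, so user_name absorbs
# everything up to the LAST hyphen.
pattern = re.compile(r"^(?P<user_name>.*)-(?P<reason>.*)$")

sample = ("jsmith@example.com-The referenced account is currently locked out "
          "and may not be logged on to")

m = pattern.match(sample)
print(m.group("user_name"))  # jsmith@example.com
print(m.group("reason"))     # The referenced account is currently locked out...
```

The sample message contains only one hyphen, so the split is unambiguous here; the pattern is not the problem.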
With this configuration I get no user_name or reason fields in my index, and there are no errors in the Logstash logs.
I then tried to work around the nested event_data.param3 reference by copying it into a top-level field first:
mutate {
    add_field => {
        "test_string" => "%{[event_data][param3]}"
    }
}
grok {
    match => {
        test_string => [
            "^%{GREEDYDATA:user_name}-%{GREEDYDATA:reason}"
        ]
    }
}
This works fine. Any clues on how I should reference a nested field name as the string to match in grok?