I have an ELK setup and I am parsing several completely different log formats. They were all single-line. Now I need to add a different one which is multi-line.
Example of two log entries:
error: callback invoked exception. sent payload: [{"key": "values"}]
custom status response: [{"key": "values"}]
callback headers: [{"key": "values"}]
error stack: [ something really bad happened
at here loremisptul (/xx/xx/x)
at here loremisptul (/xx/xx/x)
at here loremisptul (/xx/xx/x)
at here loremisptul (/xx/xx/x)
at here loremisptul (/xx/xx/x)]
error: callback invoked exception. sent payload: [{"key": "values"}]
custom status response: [{"key": "values"}]
callback headers: [{"key": "values"}]
error stack: [ something really bad happened
at here loremisptul (/xx/xx/x)
at here loremisptul (/xx/xx/x)
at here loremisptul (/xx/xx/x)
at here loremisptul (/xx/xx/x)
at here loremisptul (/xx/xx/x)]
This is my Logstash configuration file:
input {
    beats {
        port => "5043"
        codec => json
    }
}

filter {
    if [@metadata][type] == "qa-error" {
        grok {
            match => { "message" => "%{GREEDYDATA:stackone}" }
        }
    }
}

output {
    ...write it to ES
}
With this setup, every line is written as a separate document in ES. I did a bit of research, and it seems I will have to end up using the multiline settings in Filebeat.
But if I add those settings, then all of my other filters stop working properly.
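For reference, this is roughly the multiline configuration I was experimenting with in filebeat.yml (the path and the qa-error type field are placeholders for my actual setup; the pattern assumes each new entry starts with "error:" as in the example above):

filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/myapp/qa-error.log    # placeholder path
    fields:
      type: qa-error                   # used by the conditional in my Logstash filter
    # treat any line that does NOT start with "error:" as a continuation
    # of the previous line, so one stack trace becomes one event
    multiline.pattern: '^error:'
    multiline.negate: true
    multiline.match: after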
How can I approach this?