I am trying to aggregate external logs into an Elasticsearch cluster running in Kubernetes.
The syslogs are published to a Logstash instance, which forwards them to the Elasticsearch instance. Each syslog line follows the pattern Timestamp Serial_Number:Process Level Message.
Sample:
Oct 11 21:59:02 e3ed8a4e-530f-4658-b39d-049f38cda310:systemd DEBUG waiting for reply from 10.3.1.204:123 (10.3.1.204).
Using https://grokdebug.herokuapp.com/ I was able to create the following filter:
    filter {
      grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} (?<host>[A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}):(?<process>(?<=:).*) %{LOGLEVEL:loglevel} %{GREEDYDATA:body}" }
      }
      mutate {
        add_field => { "process" => "%{process}" }
        add_field => { "loglevel" => "%{loglevel}" }
      }
      date {
        match => [ "logdate", "MMM dd HH:mm:ss" ]
      }
    }
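When I run the sample line through the grok debugger, the captures come out roughly like this, so the pattern itself appears to match:

    {
      "timestamp": "Oct 11 21:59:02",
      "host": "e3ed8a4e-530f-4658-b39d-049f38cda310",
      "process": "systemd",
      "loglevel": "DEBUG",
      "body": "waiting for reply from 10.3.1.204:123 (10.3.1.204)."
    }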
But the newly added fields process and loglevel don't contain the actual values; in Kibana they show up literally as process: %{process} and loglevel: %{loglevel}.
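My current guess is that the mutate block is redundant, since grok already creates process and loglevel on a successful match, and that the date filter is looking at a logdate field that nothing sets (the grok pattern captures timestamp). An untested sketch of what I suspect the filter should look like, where I have also simplified the process capture to [^ ]+ on the assumption that process names contain no spaces:

    filter {
      grok {
        # On a successful match grok creates timestamp, host, process,
        # loglevel and body itself, so no extra mutate should be needed.
        match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} (?<host>[A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}):(?<process>[^ ]+) %{LOGLEVEL:loglevel} %{GREEDYDATA:body}" }
      }
      date {
        # Parse the field the grok pattern actually captures
        # ("timestamp", not "logdate").
        match => [ "timestamp", "MMM dd HH:mm:ss" ]
      }
    }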
What is the correct filter pattern I need to use here?