Ok so, if that's your input, you can easily extract what you want with the following grok filter:
filter {
  grok {
    match => { "message" => "%{IPORHOST:test}.*loss = %{DATA}\/%{DATA}\/%{NUMBER:loss}.*= %{NUMBER:min}\/%{NUMBER:avg}\/%{NUMBER:max}" }
  }
}
NOTE: if you want to capture the percentage symbol after the loss value as well, replace %{NUMBER:loss}.* with %{DATA:loss},.* (note the comma right after the closing curly bracket).
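For reference, this is what the full filter would look like with that substitution applied (same pattern as above, only the loss capture changed):

```
filter {
  grok {
    match => { "message" => "%{IPORHOST:test}.*loss = %{DATA}\/%{DATA}\/%{DATA:loss},.*= %{NUMBER:min}\/%{NUMBER:avg}\/%{NUMBER:max}" }
  }
}
```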
Obviously, you need to find a way to split your input so that Logstash processes multiple events.
If you can't do that and you find yourself with a single big line of logs, you can always use a Ruby filter to split it on the \n character, assign the result to an array field, then use the split filter on that field, and finally apply the grok filter to each resulting event.
Something like the following:
input {
  generator {
    count => 1
    lines => [ '8.8.8.8 : xmt/rcv/%loss = 2/2/0%, min/avg/max = 23.9/24.5/25.1\n8.8.4.4 : xmt/rcv/%loss = 2/2/0%, min/avg/max = 25.8/26.0/26.1\n' ]
  }
}

filter {
  ruby {
    code => "
      message = event.get('message')
      events = message.split('\n')
      event.set('events', events)
    "
  }
  split {
    field => "events"
  }
  grok {
    match => { "events" => "%{IPORHOST:test}.*loss = %{DATA}\/%{DATA}\/%{NUMBER:loss}%{DATA}= %{NUMBER:min}\/%{NUMBER:avg}\/%{NUMBER:max}" }
  }
  mutate {
    remove_field => ["message"]
  }
  mutate {
    rename => ["events", "message"]
  }
}

output {
  stdout {}
}
which outputs this:
{
           "avg" => "24.5",
      "sequence" => 0,
          "loss" => "0",
           "min" => "23.9",
    "@timestamp" => 2020-03-16T18:25:47.240Z,
           "max" => "25.1",
      "@version" => "1",
          "test" => "8.8.8.8",
       "message" => "8.8.8.8 : xmt/rcv/%loss = 2/2/0%, min/avg/max = 23.9/24.5/25.1",
          "host" => "fabio"
}
{
           "avg" => "26.0",
      "sequence" => 0,
          "loss" => "0",
           "min" => "25.8",
    "@timestamp" => 2020-03-16T18:25:47.240Z,
           "max" => "26.1",
      "@version" => "1",
          "test" => "8.8.4.4",
       "message" => "8.8.4.4 : xmt/rcv/%loss = 2/2/0%, min/avg/max = 25.8/26.0/26.1",
          "host" => "fabio"
}
NOTE: the input generator is there simply to simulate your input (based on what you posted).
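If you want to see what the code inside the ruby filter is doing, here's a standalone sketch (plain Ruby, outside Logstash). One subtlety: the generator line is single-quoted, so the \n in it is the literal two characters backslash + n, and Ruby's single-quoted '\n' in split matches exactly that sequence:

```ruby
# Standalone sketch of the split performed by the ruby filter above.
# In a single-quoted Ruby string, \n stays as the literal two characters
# backslash + n; split('\n') (also single-quoted) splits on that sequence,
# and Ruby drops the trailing empty string from the result.
message = '8.8.8.8 : xmt/rcv/%loss = 2/2/0%, min/avg/max = 23.9/24.5/25.1\n8.8.4.4 : xmt/rcv/%loss = 2/2/0%, min/avg/max = 25.8/26.0/26.1\n'
events = message.split('\n')
# events now holds one line per host, ready for the split filter
puts events.length # 2
```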