Hello.
I'm having trouble parsing the response from PINGing our servers; my problem is with the PING error packets.
With a good PING it's OK, but parsing the following line doesn't work:
--- 192.168.1.1 ping statistics --- 1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
So, all the good pings are stored OK, but the one that doesn't work gives me a _grokparsefailure.
I use the following code to match:
input {
  tcp {
    port => 50000
    type => pingremote
    codec => multiline {
      pattern => "^---"
      what => "next"
    }
  }
  udp {
    port => 50000
    type => pingremote
    codec => multiline {
      pattern => "^---"
      what => "next"
    }
  }
}
filter {
  if [type] == "pingremote" {
    grok {
      match => { "message" => [
        "%{NUMBER:bytes} bytes from %{HOSTNAME:TO_HOST} \(%{IP:iphost}\): icmp_seq=%{NUMBER:icmpseq:int} ttl=%{NUMBER:ttl:int} time=%{NUMBER:ms:float} ms",
        "--- %{HOSTNAME:TO_HOST} ping statistics --- 1 packets transmitted, 0 received, %{GREEDYDATA}1 %{GREEDYDATA} %{NUMBER:ms:float}ms"
      ] }
      add_field => [ "FROM_HOST", "%{host}" ]
    }
    mutate {
      remove_field => ["bytes", "port", "host", "icmpseq", "ttl", "message"]
    }
    if "_grokparsefailure" in [tags] {
      drop { }
    }
  }
}
output {
  if [type] == "pingremote" {
    elasticsearch {
      hosts => ["X.X.X.X:9200"]
      index => "pingremote-%{+YYYY.MM.dd}"
    }
  }
}
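To illustrate what the multiline codec above is meant to do, here is a minimal Python sketch of the `pattern => "^---"` / `what => "next"` combination: a line matching the pattern is buffered and folded onto the line that follows it. The join-with-newline behaviour is my assumption about how the codec concatenates lines, not taken from the plugin's source.

```python
import re

def fold_multiline(lines, pattern=r"^---"):
    """Simulate multiline codec with what => "next": a matching line
    is buffered and joined (with a newline) onto the following line."""
    events, buffer = [], []
    for line in lines:
        if re.match(pattern, line):
            buffer.append(line)  # this line belongs to the NEXT line
        else:
            events.append("\n".join(buffer + [line]))
            buffer = []
    return events

raw = [
    "--- 192.168.1.1 ping statistics ---",
    "1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms",
]
print(fold_multiline(raw))
```

Note that the resulting single event carries an embedded newline between the two original lines.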
Basically, I join the line starting with "---" to the next line so I can obtain the host name and match on the one lost packet.
In grokdebug I tested it with the following pattern and it worked, but it doesn't work in real life: --- %{HOSTNAME:ip} ping statistics --- 1 packets transmitted, 0 received, \+1 errors, 100% packet loss, time %{NUMBER:ms:float}ms
Thank you, and sorry if I didn't explain it well.
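For what it's worth, the grokdebug pattern really does match when the statistics sit on a single line. A rough Python translation (the HOSTNAME and NUMBER stand-ins below are loose approximations of my own, not grok's exact definitions):

```python
import re

# Loose stand-ins for grok's HOSTNAME and NUMBER patterns (my assumption,
# not the definitions grok actually ships with).
HOSTNAME = r"[A-Za-z0-9._-]+"
NUMBER = r"\d+(?:\.\d+)?"

pattern = (rf"--- (?P<host>{HOSTNAME}) ping statistics --- "
           rf"1 packets transmitted, 0 received, \+1 errors, "
           rf"100% packet loss, time (?P<ms>{NUMBER})ms")

single_line = ("--- 192.168.1.1 ping statistics --- 1 packets transmitted, "
               "0 received, +1 errors, 100% packet loss, time 0ms")
m = re.search(pattern, single_line)
print(m.group("host"), m.group("ms"))
```

So the pattern itself is sound for single-line input; the failure must come from what the event looks like after the multiline join.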
I think you may need to use "break_on_match" for this. For example, here's part of a grok filter I use:
grok {
  break_on_match => true
  match => [ 'message', '%{NUMBER:bytes} bytes from %{HO...
  match => [ 'message', '%{NUMBER:bytes} bytes from %{HO... (a VERY similar line here, but with the failure information instead)
Also, I notice you're using more brackets than I am, and double quotes where I'm using single quotes, so perhaps you could change that up too. I.e. your "message" has an extra "=>" after it and then a "[" before the contents of the match; mine doesn't.
The problem isn't the way I do the matching, because it works fine with other data. The difference between my code and yours is that I use a single match with an array ("table") containing all the possible patterns for the input data, while you use one match per pattern.
I think the problem is in the multiline codec: it must be adding something to the data that doesn't show up when printed, but grok notices it and the string fails to match.
Also, the break_on_match option is for stopping the matching once the first pattern matches, isn't it?
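That suspicion can be checked: the multiline codec joins the two lines with a newline, and GREEDYDATA (`.*` under the hood) does not cross newlines by default. A quick sketch using Python's `re` as a stand-in for grok's engine (the flag names differ; in grok's Oniguruma engine the analogous switch is, to my understanding, a `(?m)` prefix on the pattern):

```python
import re

# What the event looks like after the multiline codec folds the two lines
# together: one message with an embedded newline (my assumption).
joined = ("--- 192.168.1.1 ping statistics ---\n"
          "1 packets transmitted, 0 received, +1 errors, "
          "100% packet loss, time 0ms")

# .* (what GREEDYDATA expands to) stops at the newline by default,
# so the pattern cannot bridge the folded-in second line.
pattern = r"--- (?P<host>[\w.]+) ping statistics ---.*time (?P<ms>\d+)ms"

print(re.search(pattern, joined))             # no match across the newline
print(re.search(pattern, joined, re.DOTALL))  # matches once . spans newlines
```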
Hi, no worries, I'm hoping to "pay it forward" and help others. With any luck, someone can help me with my issues.
I tried using a "table" that matched all the data, but I found that entries viewed in Kibana then had empty fields for the docs that didn't meet a grok match, so I found it tidier to use "break_on_match". You're right, break_on_match stops processing at the first match, so I put the patterns in a logical order where it breaks on the first match it finds, working from the top down until it hits one.
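That top-down, first-match-wins ordering can be sketched as a plain loop (illustrative only, not Logstash internals; the pattern list and names are made up for the example):

```python
import re

# Ordered from most to least specific; the first pattern that matches
# wins, which is the behaviour break_on_match => true gives you.
patterns = [
    ("reply", r"(\d+) bytes from"),
    ("stats", r"--- [\w.]+ ping statistics ---"),
]

def first_match(message):
    for name, pat in patterns:
        if re.search(pat, message):
            return name  # stop at the first hit, like break_on_match
    return "_grokparsefailure"

print(first_match("64 bytes from 192.168.1.1 (192.168.1.1): icmp_seq=1"))
```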
Sorry, I completely missed that you were using the multiline codec! I'm using filebeat for this particular input, so I may not be much help... maybe my "noise" will help you find the right answer somehow, though!