Need help with grok filtering

Hello everyone, I have been working for several days on logging the F5 BIG-IP with ELK, but the parsing of the logs is not working at all in Logstash with grok (and I cannot find the patterns necessary for this). How would you match this "message" field to separate every field?
Log example:
Aug 1 11:28:54 f5_01 info tmm1[18879]: Rule /dnsfrontend/iRule_loggingApache <HTTP_RESPONSE>: 10.68.208.116 01/08/2018 11:28:54 -0300 4 "POST /wsregistro/rest/consulta" 200 "" ""
Grok example:

match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:[%{POSINT:syslog_$
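
(That line got truncated when pasting; it appears to be based on the standard syslog example from the Logstash documentation, which in full is

 grok {
   match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
 }

but that only splits off the standard syslog header; it does not parse the iRule payload.)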

The logs go from the F5 BIG-IP to a Red Hat VM via rsyslog, and from there they are re-sent to the remote ELK stack by Filebeat.

Kind regards, and I hope this is understandable, since English is not my first language.

If lines always look like that I would use dissect, not grok. This might work...

dissect { mapping => { "message" => '%{ts1} %{+ts1} %{+ts1} %{hostname} %{loglevel} %{program}[%{pid}]: %{w1} %{w2} %{w3}: %{ip} %{ts2} %{+ts2} %{+ts2} %{n1} "%{req}" %{response} "" ""%{}' } }
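
With the sample line from your post that should (untested) produce something like

 ts1      => "Aug 1 11:28:54"
 hostname => "f5_01"
 loglevel => "info"
 program  => "tmm1"
 pid      => "18879"
 w1       => "Rule"
 w2       => "/dnsfrontend/iRule_loggingApache"
 w3       => "<HTTP_RESPONSE>"
 ip       => "10.68.208.116"
 ts2      => "01/08/2018 11:28:54 -0300"
 n1       => "4"
 req      => "POST /wsregistro/rest/consulta"
 response => "200"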

However, if the 'Rule /dnsfrontend/iRule_loggingApache <HTTP_RESPONSE>: ' part varies that might not work and you might have to grok it.

First, thanks a lot for your response. The problem is that "dnsfrontend" refers to the app, while "iRule_loggingApache" is just the name of the iRule on the F5 BIG-IP and will always be the same. In the future I need this filtered by app name, so we can separate the requests/responses by app in Kibana.

Regards and thanks

OK, so something like

 dissect { mapping => { "message" => '%{ts1} %{+ts1} %{+ts1} %{hostname} %{loglevel} %{program}[%{pid}]: %{w1} /%{appname}/%{rulename} %{w3}: %{ip} %{ts2} %{+ts2} %{+ts2} %{n1} "%{req}" %{response} "" ""%{}' } }

Really, thanks a lot. With that dissect mapping I was able to separate every app by the rulename field.

Thanks a lot again, Victor.

So finally I need this to be filtered by grok. Is there any advice or help you can give me? I have been trying for a few days already.

Also, I need the "ts" fields to be merged together.

In dissect that is what "%{ts1} %{+ts1} %{+ts1}" and "%{ts2} %{+ts2} %{+ts2}" do; the three parts are merged into a single field.
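
For example, this minimal sketch (hypothetical field names)

 dissect { mapping => { "message" => '%{ts1} %{+ts1} %{+ts1} %{rest}' } }

applied to 'Aug 1 11:28:54 f5_01 ...' should set ts1 to "Aug 1 11:28:54", because each %{+ts1} appends its value along with the delimiter found in front of it.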

Then it didn't work, because all three are separate fields and I want all that information together. And I still need the grok working, because I want the timestamp to be replaced by these ts fields.

What are the three separate fields called?

This is how I see it in Discover in Kibana:

 ts:  Aug
 ts1: 8
 ts2: 08/08/2018 12:37:19 -0300

Originally my post had a typo: "%{ts} %{+ts1} %{+ts1}". If you change the ts to be ts1 it should work.

Discover still shows:

 ts:  Aug
 ts1: 8
 ts2: 08/08/2018 14:34:49 -0300

=(

If you are still seeing ts in newly ingested events then you have not updated the dissect filter.

This is how it is now:
dissect { mapping => { "message" => '%{ts} %{+ts1} %{+ts1} %{hostname} %{loglevel} %{program}[%{pid}]: %{w1} /%{appname}/%{rulename} %{w3}: %{ip} %{ts2} %{+ts2} %{+ts2} %{n1} "%{req}" %{response} "" ""%{}' } }
and then I restarted the logstash service, but that is what Discover is showing =(.

You need to change that to %{ts1} %{+ts1} %{+ts1}
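
That is, the full mapping becomes

 dissect { mapping => { "message" => '%{ts1} %{+ts1} %{+ts1} %{hostname} %{loglevel} %{program}[%{pid}]: %{w1} /%{appname}/%{rulename} %{w3}: %{ip} %{ts2} %{+ts2} %{+ts2} %{n1} "%{req}" %{response} "" ""%{}' } }

Once ts2 is a single field you can replace @timestamp with it using a date filter, something like (untested, assuming the dates are day-first, which matches the 'Aug 1' in your sample)

 date { match => [ "ts2", "dd/MM/yyyy HH:mm:ss Z" ] }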
