Normally when I get a grok parsing error, the tags field is populated on the record that failed. We don't use that field for anything else, so when it exists I know something went wrong:

"tags": [
  "_grokparsefailure"
]
Assuming you don't normally use the tags field, you can find every record where it exists by searching for "exists:tags" in Kibana.
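If you prefer to run the check directly against Elasticsearch instead of the Kibana search bar, an exists query does the same thing. A minimal sketch, assuming your indices match logstash-* (adjust to your own index pattern); this is Kibana Dev Tools / curl-style syntax:

GET logstash-*/_search
{
  "query": {
    "exists": { "field": "tags" }
  }
}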
If your Grok rules create new fields, then just look at which fields should exist but don't, and then you know which rule went wrong.
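To make that concrete, here is a minimal sketch of a grok filter; the log format and the field names (clientip, verb, request, status) are invented for the example. If those fields never show up on a record, this is the rule that didn't match, and the tag_on_failure option lets you replace the generic tag with one that names the filter:

filter {
  grok {
    # On success this creates the clientip, verb, request and status fields;
    # if they are missing from a record, this is the rule that failed.
    match => { "message" => "%{IPORHOST:clientip} %{WORD:verb} %{URIPATHPARAM:request} %{NUMBER:status}" }
    # Optional: a filter-specific failure tag instead of the generic one.
    tag_on_failure => ["_grokparsefailure_access"]
  }
}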
Once you have the record you can use http://grokconstructor.appspot.com/do/match to help debug where it went wrong.
From what I found, it is related to groks with regexp ranges ([]) that include the ] character (and maybe other characters).
In my config, a valid regexp [^]] was giving problems in Logstash... I had to replace it with [^\]] (escaping the closing bracket) and after that the warning disappeared.
I think I had another example, but from what I remember it was also a problem with the regexp ranges.
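To make the workaround concrete, here is a sketch of the kind of pattern involved; the surrounding expression and field names are invented, only the character class is the point:

filter {
  grok {
    # The character class [^]] (anything but a closing bracket) was accepted
    # by the online grok testers but caused problems in Logstash here.
    # Escaping the bracket as [^\]] made the warning disappear.
    match => { "message" => "\[%{WORD:level}\] (?<detail>[^\]]+)" }
  }
}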
For other users searching for this:
No grok test tool could point me in the right direction, nor could the Logstash config test; only by trial and error could I find the issue. I started to include a tag in every filter file to see which filters were applied to each message (see the sketch below), and checked all failed _grokparsefailure messages to understand why they failed. After some cleanup, I finally managed to find an example that https://grokdebug.herokuapp.com/ and https://grokconstructor.appspot.com/ could parse without problems, but Logstash kept rejecting. Then I started to break that grok down (removing blocks and resending the message) until I found the block that was failing... Another fight was finding out how to fix it, but luckily that was faster than the previous steps.
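For anyone who wants to copy that approach, a minimal sketch of what one such filter file could look like; the file/tag names and the %{COMBINEDAPACHELOG} pattern are just placeholders for whatever each of your filter files actually does:

filter {
  # Marker tag so every event records that this filter file touched it.
  mutate { add_tag => ["filter_10_apache_access"] }

  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
    # Keep the generic tag, plus one that identifies the failing filter file.
    tag_on_failure => ["_grokparsefailure", "_grokparsefailure_10_apache_access"]
  }
}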
Maybe Elastic can check for this special case in future versions of the config test.