I have deployed a Filebeat -> Logstash -> ElasticSearch -> Kibana stack for log monitoring in my testing environment. I have two GROK patterns specified in the Logstash configuration. The problem is that a few of the messages pass from Logstash to ElasticSearch unparsed (the GROK filter is not applied), and without any parse-failure tag. Also, the Filebeat fields like beat.name, beat.hostname and source are missing in the final message in ElasticSearch.
The same GROK patterns did not cause any problems when I was using the ELK stack without Filebeat, so I am not sure whether this is related to Filebeat or to Logstash.
Note: the unparsed messages are a small fraction of the total (roughly 1K out of 1 million log lines).
That means they are somehow succeeding. Without seeing your configs and the data, it will be almost impossible to tell you what is going on.
What I like to do is add a tag to every grok statement (or group of groks) so that I can see which filters an event actually hit.
add_tag => [ "rule-%{type}-999" ] — this way I know where it matched. I don't end up with a lot of tags because I wrap each type of grok in a conditional like if [type] =~ /apache/, so only my Apache groks run on Apache events.
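To illustrate, here is a minimal sketch of that pattern (the apache type check and the COMBINEDAPACHELOG pattern are assumptions for the example, not your actual config):

```
filter {
  if [type] =~ /apache/ {
    grok {
      match   => { "message" => "%{COMBINEDAPACHELOG}" }
      add_tag => [ "rule-%{type}-999" ]
    }
  }
}
```

Since add_tag is only applied when the grok match succeeds, any event that arrives in ElasticSearch without one of these rule tags (or with _grokparsefailure) is an event your filters never parsed, which makes the stray messages easy to find in Kibana.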
If you can post some of your config and sample data, we can try to figure it out.