_grokparsefailure in Logstash when consuming Beats input

I am seeing _grokparsefailure tags in my Logstash output even though all the fields are getting values. My input is from a Filebeat source; some sample JSON output from Filebeat is shown further down.

My Logstash pipeline config looks like this:

input {
    beats {
        port => "10002"
    }
}
filter {
    grok {
        match => {
            "message" => "%{TIMESTAMP_ISO8601:server_time} (\[\s*[^]]+\] )?%{WORD:log_level}"
        }
    }
}
output {
    stdout { codec => rubydebug }
}

My Filebeat source is configured to combine consecutive lines into a single multiline message in order to capture stack traces. This is the relevant section of the Filebeat configuration (an illustration of its effect follows the snippet):

multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}
multiline.negate: true
multiline.match: after
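
As I understand the Filebeat docs, negate: true combined with match: after means that consecutive lines which do not start with a date are appended to the previous line that did. So a stack trace like the following (a made-up example reusing the class name from my log) would be shipped as a single event whose message field contains all three lines:

2017-09-06 15:05:02,118 [ Session Task-2] ERROR ca.foo.FeedImportRequestMessageListener.onMessage(71):  - feed import failed
java.lang.NullPointerException
        at ca.foo.FeedImportRequestMessageListener.onMessage(FeedImportRequestMessageListener.java:71)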

Here is some sample debug output from Filebeat:

2017/09/08 18:57:21.219082 client.go:214: DBG  Publish: {
  "@timestamp": "2017-09-08T18:57:16.173Z",
  "beat": {
     "hostname": "Newton.local",
     "name": "Newton.local",
     "version": "5.5.2"
   },
  "input_type": "log",
  "message": "2017-09-06 15:05:00,044 [ Session Task-2] ERROR ca.foo.FeedImportRequestMessageListener.onMessage(63):  - ### Before FeedMessageStatus.instance().close()",
  "offset": 737873,
  "source": "/bar/logs/focus.log",
  "type": "log"
}

The Logstash output correctly parses the server_time and log_level fields, but a _grokparsefailure tag shows up anyway. Is this because of the multiline settings in Filebeat?

I do not get the tag if I feed the message field contents to Logstash directly, as in the sketch below.
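
A minimal version of that test, using stdin in place of the beats input with the same grok pattern:

input {
    stdin { }
}
filter {
    grok {
        match => {
            "message" => "%{TIMESTAMP_ISO8601:server_time} (\[\s*[^]]+\] )?%{WORD:log_level}"
        }
    }
}
output {
    stdout { codec => rubydebug }
}

Pasting the message line into the console then yields the parsed fields without the _grokparsefailure tag.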

Shouldn't [^]] be [^\]] instead? In any case, build your grok expressions gradually. Start with the simplest possible pattern (e.g. ^%{TIMESTAMP_ISO8601:server_time}) and verify that it works, then append additional tokens to your expression, as sketched below.
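
For example, a sketch of that progression (with the escaped bracket), using one match line at a time inside the grok filter and checking the output after each step:

# step 1: timestamp only
match => { "message" => "^%{TIMESTAMP_ISO8601:server_time}" }

# step 2: add the optional bracketed thread name (note the escaped ])
match => { "message" => "^%{TIMESTAMP_ISO8601:server_time} (\[\s*[^\]]+\] )?" }

# step 3: add the log level
match => { "message" => "^%{TIMESTAMP_ISO8601:server_time} (\[\s*[^\]]+\] )?%{WORD:log_level}" }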
