Grok Filter Works With 5.4.0 ELK Stack, Not 6.2.4 on Windows Server 2012

I have created a grok filter in a logstash.json file shown here:

input {
  beats {
    port => 5044
    type => "log"
  }
}

filter {

  # Ignore the comments that IIS will add to the start of the W3C logs
  #
  if [message] =~ "^#" {
    drop {}
  }

  grok {
    ## Very helpful site for building these statements:
    # http://grokdebug.herokuapp.com/
    #
    # This is configured to parse out every field of IIS's W3C format when
    # every field is included in the logs
    #
    match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:serviceName} %{WORD:serverName} %{IP:serverIP} %{WORD:method} %{URIPATH:uriStem} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:protocolVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken}"]
  }
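  # For reference, a hypothetical W3C log line in the shape this pattern
  # expects (invented for illustration, not taken from the actual logs),
  # with every field enabled:
  #
  #   2018-05-14 18:30:05 W3SVC1 WEBSRV01 10.0.0.5 GET /default.aspx - 443 - 203.0.113.42 HTTP/1.1 Mozilla/5.0+(compatible) - - www.example.com 200 0 0 5120 1024 62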

  # Set the event timestamp from the log
  #
  date {
    match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "Etc/UTC"
  }
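  # e.g. a log_timestamp of "2018-05-14 18:30:05" (from the hypothetical
  # line above) becomes the event's @timestamp, interpreted as UTC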

  # If the log record has a value for 'bytesSent', then add a new field
  # to the event that converts it to kilobytes
  #
  if [bytesSent] {
    ruby {
      code => "event.set('kilobytesSent', (event.get('bytesSent').to_i / 1024.0))"
    }
  }

  # Do the same conversion for the bytes received value
  #
  if [bytesReceived] {
    ruby {
      code => "event.set('kilobytesReceived', (event.get('bytesReceived').to_i / 1024.0))"
    }
  }
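  # e.g. for the hypothetical line above, bytesSent = 5120 yields
  # kilobytesSent = 5.0 and bytesReceived = 1024 yields kilobytesReceived = 1.0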

  # Perform some mutations on the records to prep them for Elastic
  #
  mutate {
    ## Convert some fields from strings to integers
    #
    convert => ["bytesSent", "integer"]
    convert => ["bytesReceived", "integer"]
    convert => ["timetaken", "integer"]

    ## Create a new field for the reverse DNS lookup below
    #
    add_field => { "clientHostname" => "%{clientIP}" }

    ## Finally remove the original log_timestamp field since the event will
    #  have the proper date on it
    #
    remove_field => [ "log_timestamp" ]
  }

  # Do a reverse lookup on the client IP to get their hostname
  #
  dns {
    ## Now that we've copied the clientIP into a new field we can
    # simply replace it here using a reverse lookup
    #
    action => "replace"
    reverse => ["clientHostname"]
  }

  # Identify the location of the client accessing the site
  #
  geoip {
    source => "clientIP"
  }
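  # The geoip filter adds a 'geoip' field with location data (country,
  # city, coordinates, etc.) resolved from clientIP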

  # Parse out the user agent
  #
  useragent {
    # The grok pattern above captures this field as 'userAgent' (capital A),
    # so the source must match that exact name
    source => "userAgent"
    prefix => "browser"
  }

}

# We're only going to output these records to Elasticsearch, so configure
# that.
#
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

With the 5.4.0 stack (Kibana 5.4.0 reading from Elasticsearch 5.4.0), the IIS messages are parsed correctly, including the geoip data. I then installed the 6.2.4 ELK stack and used the same logstash.json file. The messages still arrive in Kibana via Elasticsearch 6.2.4, but the IIS message is not parsed, and under tags I get the dreaded _grokparsefailure and _geoip_lookup_failure. I suspect some of the syntax is no longer supported.

I did attempt to send the output to the command line, and there it looks like Logstash parsed the data. Any help walking me through further troubleshooting would be greatly appreciated.

Thank you,

Bob

> I have created a grok filter in a logstash.json file shown here:

If you tell Logstash 6 to read all config files in a directory, it will only pick up files whose names match *.conf, so your logstash.json file would be ignored; rename it (e.g. to logstash.conf) or point Logstash at the file explicitly.

Also, we can't help without seeing an example event. Use a stdout { codec => rubydebug } output to dump a raw event.
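For example, here is a minimal sketch of the output section with a temporary debugging output added alongside your existing Elasticsearch output (reusing the settings from your file):

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  # Temporary debugging output: prints each processed event, with all
  # parsed fields and any failure tags, to the Logstash console
  stdout { codec => rubydebug }
}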
