Grok filter is not working

Hello Guys,
I have Filebeat installed on a Windows server. Filebeat sends data from files on this server to another server running Logstash. Here are the config files:

logstash.conf (note that I cut the match pattern short because it is very long):

input {
  beats {
    type => "beats"
    port => 9000
    tags => ["windows","Hadoop"]
  }
}
filter {
  if [type] == "beats" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:EventTime} %{GREEDYDATA:jobname} " }
    }
  }
}
output {
  if [type] == "beats" and "_grokparsefailure" in [tags] {
    file { path => "/var/log/hadoop-failed-%{+YYYY-MM-dd}" }
  }
  elasticsearch {
    hosts => ["serverELK:9200"]
    codec => json_lines
  }
}

filebeat.yml:

filebeat:
  prospectors:
    - paths:
        - E:\Hadoop\*.bcp
      encoding: utf-8
      input_type: log

output:
  logstash:
    hosts: ["cviaddzl02:9000"]

I do receive the data in Elasticsearch, so the connection itself is not the problem.
The problem is that the grok filter is not working: in Kibana the data shows up in a single field named message.

Here is one line from the file being sent, with ASCII control codes made visible (^I is a TAB):

61179392^I38358^I23028^I23028^I""^I""^I11831^I1379023^I1636937^I1664738^I242611^I35032^I24892^I24892^I""^I""^I17539^I0^I0^I199^I0^I"

Every string value is in double quotes, and the values are separated by TABs.
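
(For reference: in a grok pattern for this layout, the quotes around each value and the tab between them have to be matched literally. A minimal sketch for the first two values, untested, with made-up field names:)

filter {
  grok {
    # Match two quoted values separated by a TAB. \" matches a literal
    # double quote and \t matches the tab character; field1/field2 are
    # placeholder names for illustration only.
    match => { "message" => "\"%{DATA:field1}\"\t\"%{DATA:field2}\"" }
  }
}

(DATA stops at the next literal quote, so it won't swallow the rest of the line the way GREEDYDATA can.)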
Please help me find what is wrong with my grok filter.

Thank you.

Since you're not deleting the message field it'll still be passed on to Kibana. If you're extracting the parts of that field into separate fields, you might as well delete the original field:

grok {
  match => { "message" => "..." }
  remove_field => ["message"]
}

Hello Magnus,
I did that, but it still doesn't work.

This time I added stdout { codec => rubydebug } to the output:

Sending logstash logs to /var/log/logstash/logstash.log.
{
       "message" => "\"2016-04-01 00:00:00\"\t\"SnowmobileProd_panel_ltt_agg_mid-7053179-65182-5845-5934-1\"\t\"job_201601281440_5099978\"\t\"ngua_admin\"\t\"NORMAL\"\t\"SUCCESS\"",
      "@version" => "1",
    "@timestamp" => "2016-06-01T19:58:22.744Z",
          "beat" => {
        "hostname" => "host1",
            "name" => "host2"
    },
         "count" => 1,
        "fields" => nil,
    "input_type" => "log",
        "offset" => 0,
        "source" => "E:\\Hadoop\\prueba.bcp",
          "type" => "log",
          "host" => "CVIADDBA08",
          "tags" => [
        [0] "windows",
        [1] "Hadoop",
        [2] "beats_input_codec_plain_applied"
    ]
}

I tested the grok pattern in the Grok Debugger (http://grokdebug.herokuapp.com/) and everything looks good there.

The new logstash.conf:

input {
  beats {
    type => "beats"
    port => 9000
    tags => ["windows","Hadoop"]
  }
}

filter {
  if [type] == "beats" {
    grok {
      patterns_dir => ["/etc/logstash/conf.d/patterns/custom"]
      match => { "message" => " %{TIMESTAMP_ISO8601:EventTime}\"\t\"%{GREEDYDATA:jobname}\"\t\"%{GREEDYDATA:jobid}" }
      remove_field => ["message"]
    }
  }
}

output {
  if [type] == "beats" and "_grokparsefailure" in [tags] {
    file { path => "/var/log/hadoop-failed-%{+YYYY-MM-dd}" }
  }
  elasticsearch {
    hosts => ["hostELK:9200"]
    codec => json_lines
  }
  stdout { codec => rubydebug }
}

Thank you.

In your grok expression you have a space before the ISO8601 timestamp that isn't present in the string you want to match. If that doesn't help, start with the smallest possible expression (%{TIMESTAMP_ISO8601:EventTime}) and make sure that works. Then add new tokens at the end of the expression until things stop working. That way you'll find which part of the expression is problematic.
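
(A sketch of that incremental approach, based on the message shown in the rubydebug output above; using DATA instead of GREEDYDATA is an illustrative choice, not a requirement:)

filter {
  grok {
    # Step 1: match only the timestamp. No leading space, and the message
    # itself starts with a literal double quote.
    match => { "message" => "\"%{TIMESTAMP_ISO8601:EventTime}\"" }
    # Step 2, once step 1 matches: extend the pattern one token at a time, e.g.
    #   "\"%{TIMESTAMP_ISO8601:EventTime}\"\t\"%{DATA:jobname}\""
    # and keep going until a step stops matching; that token is the problem.
  }
}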