Hello guys,

I have Filebeat installed on a Windows server; it ships data from files on that server to another server running Logstash. Here are the config files.

logstash.conf (note: I cut the match pattern short because the real one is very long):
input {
  beats {
    type => "beats"
    port => 9000
    tags => ["windows", "Hadoop"]
  }
}

filter {
  if [type] == "beats" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:EventTime} %{GREEDYDATA:jobname}" }
    }
  }
}

output {
  if [type] == "beats" and "_grokparsefailure" in [tags] {
    file { path => "/var/log/hadoop-failed-%{+YYYY-MM-dd}" }
  }
  elasticsearch {
    hosts => ["serverELK:9200"]
    codec => json_lines
  }
}
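To check the pattern on its own, a small test pipeline like this could help; this is only a sketch, where stdin and stdout stand in for my real beats input and elasticsearch output:

input {
  stdin {}
}

filter {
  grok {
    # Same (truncated) pattern as in logstash.conf above
    match => { "message" => "%{TIMESTAMP_ISO8601:EventTime} %{GREEDYDATA:jobname}" }
  }
}

output {
  # rubydebug prints every field of each event, so a _grokparsefailure tag is easy to spot
  stdout { codec => rubydebug }
}

Pasting the sample line below into this pipeline shows immediately whether the pattern matches.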
Filebeat.yml:
filebeat:
  prospectors:
    -
      paths:
        - E:\Hadoop\*.bcp
      encoding: utf-8
      input_type: log
output:
  logstash:
    hosts: ["cviaddzl02:9000"]
I do receive the data in Elasticsearch, so the shipping part of this setup works. The problem is that the grok filter is not working: everything ends up in a single field named message, as you can see in the attached Kibana screenshot.
Here is one line from a shipped file, with ASCII control characters made visible (^I is a TAB):
61179392^I38358^I23028^I23028^I""^I""^I11831^I1379023^I1636937^I1664738^I242611^I35032^I24892^I24892^I""^I""^I17539^I0^I0^I199^I0^I"
Every string value is wrapped in double quotes, and the fields are separated by tabs.
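Since the fields are tab-separated, I wonder whether the csv filter would fit better than grok; here is a sketch I am considering (the column names are placeholders, since the real file has many more columns):

filter {
  if [type] == "beats" {
    csv {
      # The character between the quotes is a literal TAB
      separator => "	"
      # Placeholder names; the real file has many more columns
      columns => ["col1", "col2", "col3"]
    }
  }
}

As far as I know, the csv filter's quote_char defaults to a double quote, so it should also strip the quotes around the string values. If grok is still the right tool here, I suppose the pattern would need explicit \t between the fields instead of spaces.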
Please help me find what is wrong with my grok filter.
Thank you.