I'm having an odd issue where some of my log events are getting shipped missing the normal fields like beat.hostname, source, etc. I'm not sure where to start troubleshooting.
I have a mix of CentOS 6 and CentOS 7 hosts, all running Filebeat 1.3.1, shipping to (currently) a single server running Logstash, Elasticsearch, and Kibana in Docker containers.
Any ideas on where to start would be much appreciated.
My filebeat config:

```yaml
filebeat:
  prospectors:
    - paths:
        - /var/log/*/*_perf.log
      encoding: plain
      fields_under_root: false
      input_type: log
      document_type: perf
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 10s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
```
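For comparison while I troubleshoot, here's a sketch of what the equivalent prospector-level multiline settings would look like in filebeat 1.x, using the same pattern as my Logstash codec below (I haven't deployed this, just drafted it from the filebeat 1.x multiline options):

```yaml
# Hypothetical alternative: let filebeat join multiline events itself
# instead of the multiline codec on the Logstash beats input.
filebeat:
  prospectors:
    - paths:
        - /var/log/*/*_perf.log
      input_type: log
      document_type: perf
      multiline:
        # Lines NOT starting with a yyyy-MM-dd timestamp are appended
        # to the previous line (negate + match: after).
        pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
        negate: true
        match: after
```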
And my Logstash config (ERB template):

```
input {
  beats {
    port => <%= @beats_port %>
    codec => multiline {
      pattern => '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      negate => "true"
      what => "previous"
    }
  }
}

filter {
  ruby {
    code => "
      fields = event['message'].scan(/\S*=\S*/)
      for field in fields
        if field.include? '='
          field = field.split('=')
          if !field[0].nil? && !field[1].nil?
            field[0] = field[0].gsub('.','_')
            if field[1].delete('ms').to_i.to_s == field[1].delete('ms') && field[0] != 'Event'
              event[field[0]] = field[1].delete('ms').to_s
            else
              event[field[0]] = field[1].to_s.delete(',')
            end
          end
        end
      end
    "
  }

  #grok { match => {"message" => "%{TIMESTAMP_ISO8601:timestamp}"}}
  #date { match => {"[@metadata][timestamp]" => "yyyy-MM-dd HH:mm:ss.SSS"}}

  mutate { convert => {"FasaID" => "string" "ItemsProcessed" => "integer" "Total" => "integer" "count" => "integer"}}
}

output {
  elasticsearch {
    hosts => <%= @elasticsearch_hosts %>
    #manage_template => false
    #index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    #document_type => "%{[@metadata][type]}"
  }
  #stdout { codec => rubydebug { metadata => "true" }}
}
```
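To sanity-check the ruby filter logic in isolation, I can run the same parsing as a plain Ruby script with a Hash standing in for the Logstash event (the sample message line is made up for illustration; the real logs obviously vary):

```ruby
# Stand-in for the Logstash event: a plain Hash with a 'message' key.
event = { 'message' => '2016-10-05 12:00:00.123 Event=Sync Total=42ms count=7,' }

# Same logic as the ruby filter: pull out key=value tokens.
fields = event['message'].scan(/\S*=\S*/)
for field in fields
  if field.include? '='
    field = field.split('=')
    if !field[0].nil? && !field[1].nil?
      field[0] = field[0].gsub('.', '_')
      # Values that are numeric once 'm'/'s' chars are stripped (e.g. '42ms')
      # keep only the digits; everything else keeps its text minus commas.
      if field[1].delete('ms').to_i.to_s == field[1].delete('ms') && field[0] != 'Event'
        event[field[0]] = field[1].delete('ms').to_s
      else
        event[field[0]] = field[1].to_s.delete(',')
      end
    end
  end
end
```

Running that leaves `event['Total']` as `'42'`, `event['count']` as `'7'`, and `event['Event']` as `'Sync'`, which matches what I see on the good events.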