Hi,
I use the following workflow:
metricbeat gathers metrics -> the file output buffers them on disk -> filebeat ships the log to logstash -> logstash does further enrichment -> the events are sent to elasticsearch.
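For context, the buffering step uses metricbeat's file output. A minimal sketch of that output section (the path and filename here are assumptions chosen to match the log file referenced further down):

```yaml
# metricbeat.yml -- file output used as an on-disk buffer (sketch)
output.file:
  # Directory where metricbeat writes its events as JSON lines
  path: "/var/log/metricbeat"
  filename: metricbeat_probes.log
```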
Here is my filebeat prospector configuration for the metricbeat log:
# metricbeat probes
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  # To fetch all ".log" files from a specific level of subdirectories,
  # /var/log/*/*.log can be used.
  # For each file found under this path, a harvester is started.
  # Make sure no file is defined twice, as this can lead to unexpected behaviour.
  paths:
    - /var/log/metricbeat/metricbeat_probes.log
  encoding: utf-8
  ignore_older: 24h
  document_type: metricsets
  scan_frequency: 10s
  symlinks: true
  # json.message_key: message
  json.keys_under_root: true
Here is an example log line from the metricbeat log:
{"@timestamp":"2017-02-27T10:55:14.839Z","beat":{"hostname":"xxxx","name":"xxxx","version":"5.2.1"},"hostName":"xxxx","metricset":{"module":"system","name":"process","rtt":25517},"serverType":"control","stage":"Production","system":{"process":{"cmdline":"/usr/share/metricbeat/bin/metricbeat -c /etc/metricbeat/metricbeat.yml -path.home /usr/share/metricbeat -path.config /etc/metricbeat -path.data /var/lib/metricbeat -path.logs /var/log/metricbeat","cpu":{"start_time":"2017-02-27T10:04:43.000Z","total":{"pct":0.001300}},"fd":{"limit":{"hard":4096,"soft":1024},"open":7},"memory":{"rss":{"bytes":12972032,"pct":0.001600},"share":5967872,"size":499920896},"name":"metricbeat","pgid":28560,"pid":28560,"ppid":1,"state":"sleeping","username":"root"}},"type":"metricsets"}
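To compare the event time carried inside the JSON line with the @timestamp that ends up in Elasticsearch, the embedded field can be extracted directly; a quick check (using a shortened copy of the line above):

```python
import json

# Shortened version of the metricbeat log line shown above
line = ('{"@timestamp":"2017-02-27T10:55:14.839Z",'
        '"metricset":{"module":"system","name":"process"},'
        '"type":"metricsets"}')

event = json.loads(line)
# The original event time is carried inside the JSON document itself
print(event["@timestamp"])  # → 2017-02-27T10:55:14.839Z
```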
Now I have encountered the issue that the @timestamp on the indexed event is not correct: it is later than the @timestamp in the log line, as if some processing time were added.
There are no error tags on the event, just the normal "beats_input_raw_event" tag.
Any ideas? What is my mistake?
Thanks, Andreas