Hi,
I was using Metricbeat 6.5.4 on my dev system before I upgraded to 7.3.0.
Metricbeat does not push directly to Elasticsearch; instead it writes the probes it takes to the filesystem, so the filesystem is my first buffer layer for Metricbeat events. These files are then picked up by Filebeat, shipped to Redis, fetched from there by Logstash, and finally pushed to Elasticsearch.
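To give an idea of the Metricbeat side: it only uses the file output, roughly like this (simplified; the path is just a placeholder):

# metricbeat.yml (output section): write events to local files instead of shipping them directly
output.file:
  path: "/var/log/metricbeat-buffer"
  filename: metricbeat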
Filebeat simply adds a field "logType" and uses this logType as the Redis key.
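The Filebeat side is essentially the following (again simplified; paths are placeholders and the ${...} variables are reused from the Logstash config below for readability):

# filebeat.yml: read the Metricbeat files, tag them with logType, and use that field as the Redis list key
filebeat.inputs:
  - type: log
    paths:
      - /var/log/metricbeat-buffer/metricbeat*
    fields:
      logType: metricbeat
    fields_under_root: true

output.redis:
  hosts: ["${REDIS_HOST}:${REDIS_PORT}"]
  db: ${REDIS_DB}
  key: "%{[logType]}"
  datatype: list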
The Logstash configuration is quite trivial:
input
{
  redis
  {
    data_type => "list"
    db => "${REDIS_DB}"
    host => "${REDIS_HOST}"
    port => "${REDIS_PORT}"
    key => "metricbeat"
  }
}

filter
{
  json
  {
    id => "json"
    source => "message"
  }

  # remove the raw message field unless JSON parsing failed (_jsonparsefailure tag)
  if ("_jsonparsefailure" not in [tags])
  {
    mutate
    {
      remove_field => ['message']
    }
  }
}

###
# this filter just sets the index name and the index rotation interval
###
filter
{
  mutate
  {
    # as suffix you can use one of the following options: SUFFIX_WEEKLY, SUFFIX_DAILY, SUFFIX_MONTHLY
    # !!! NOT USING PREFIX plx_ FOR METRICBEAT !!!
    add_field => { "[@metadata][indexName]" => "%{[logType]}-${SUFFIX_WEEKLY}" }
  }
}

output
{
  elasticsearch
  {
    hosts => ["${ES_HOST}:${ES_PORT}"]
    ssl => "${USE_ES_SSL}"
    cacert => "${ES_CA_CERT_PATH}"
    # credentials are fetched from the environment or the logstash-keystore
    user => "${LOGSTASH_USER}"
    password => "${LOGSTASH_PASSWORD}"
    index => "%{[@metadata][indexName]}"
  }
}
After upgrading, I exported the Metricbeat 7.3.0 index template and added it to Elasticsearch via Kibana's Dev Tools.
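In Dev Tools the request looked roughly like this; the body is the unmodified template export from Metricbeat 7.3.0 and is omitted here because it is far too long to paste:

PUT _template/metricbeat-7.3.0
{
  "index_patterns": ["metricbeat-*"],
  "order": 1,
  "settings": { ... },
  "mappings": { ... }
}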
I now have these templates (columns: template name, index patterns, order, version):
.ml-meta [.ml-meta] 0 7030099
.monitoring-alerts-7 [.monitoring-alerts-7] 0 7000199
.ml-config [.ml-config] 0 7030099
.watches [.watches*] 2147483647
.ml-notifications [.ml-notifications] 0 7030099
.data-frame-notifications-1 [.data-frame-notifications-*] 0 7030099
.watch-history-10 [.watcher-history-10*] 2147483647
.ml-anomalies- [.ml-anomalies-*] 0 7030099
metricbeat-7.3.0 [metricbeat-*] 1
.watch-history-9 [.watcher-history-9*] 2147483647
.logstash-management [.logstash] 0
.kibana_task_manager [.kibana_task_manager] 0 7030099
.management-beats [.management-beats] 0 70000
.ml-state [.ml-state*] 0 7030099
.monitoring-kibana [.monitoring-kibana-7-*] 0 7000199
plx [plx*] 0
.monitoring-beats [.monitoring-beats-7-*] 0 7000199
.monitoring-es [.monitoring-es-7-*] 0 7000199
logstash [logstash-*] 0 60001
.monitoring-logstash [.monitoring-logstash-7-*] 0 7000199
.triggered_watches [.triggered_watches*] 2147483647
.data-frame-internal-1 [.data-frame-internal-1] 0 7030099
I also deleted all existing metricbeat indices, so there shouldn't be any type mismatches left over from old data. All Metricbeat instances are updated to 7.3.0, so there is no cross-version traffic.
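The cleanup itself boiled down to removing every metricbeat-* index, e.g. via Dev Tools:

DELETE metricbeat-*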
Nevertheless, Metricbeat data is getting stuck in Logstash:
[2019-08-12T12:36:33,468][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-2019.w33", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x6f1e9013>], :response=>{"index"=>{"_index"=>"metricbeat-2019.w33", "_type"=>"_doc", "_id"=>"xIfThWwBNHp2V5M6a9Pb", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [source] tried to parse field [source] as object, but found a concrete value"}}}}
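If it helps, I can also post the output of these two Dev Tools queries for the conflicting field (index and template names taken from the error above and the template list):

GET _template/metricbeat-7.3.0?filter_path=*.mappings.properties.source

GET metricbeat-2019.w33/_mapping?filter_path=*.mappings.properties.source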
Any idea what is causing this?
Thanks, Andreas