For some time now, Logstash has been unable to accept events from other hosts. It just fills the disk with logs like:
[2018-07-12T01:14:05,241][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2018.07.11", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x48a8f43], :response=>{"index"=>{"_index"=>"filebeat-2018.07.11", "_type"=>"doc", "_id"=>"RXGfi2QB4MOEB4eqhmGr", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}
This is a known problem with version 6.3 of the Elastic Stack. Search the archives for "object mapping for [host] tried to parse field [host] as object, but found a concrete value" for details.
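The error means the index's mapping defines `host` as an object (Beats 6.3 started sending `host.name` and related subfields), while some events still arrive with `host` as a plain string, so Elasticsearch rejects them. One workaround that is often suggested for mixed Beats versions is to normalize the field in the Logstash filter stage. This is only a sketch, not an official fix; the conditional assumes a string-valued `host` has no `[host][name]` subfield:

```
filter {
  # If "host" arrived as a concrete string (pre-6.3 Beats),
  # move it under host.name so it matches the 6.3 object mapping.
  if [host] and ![host][name] {
    mutate {
      rename => { "[host]" => "[host][name]" }
    }
  }
}
```

Alternatively, some people simply drop the field (`mutate { remove_field => ["host"] }`) if they don't need it, which also avoids the mapping conflict.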
I suspect the problem appeared after we added a new node and installed the 6.3.x version of Filebeat.
Yesterday I upgraded all the Filebeats in the infrastructure, but the problem persists.