New filebeat-related log error

While trying to get all filebeat fields to show, I changed something (I don't know what) that now produces the following in logstash-plain.log:

[2018-07-26T12:45:12,299][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2018.07.26", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x49c83803>], :response=>{"index"=>{"_index"=>"filebeat-2018.07.26", "_type"=>"doc", "_id"=>"yKl612QBKgNxLMIo4nXn", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}

and this in elasticsearch.log:

[2018-07-26T13:44:02,905][DEBUG][o.e.a.b.TransportShardBulkAction] [filebeat-2018.07.26][3] failed to execute bulk item (index) BulkShardRequest [[filebeat-2018.07.26][3]] containing [index {[filebeat-2018.07.26][doc][iKmw12QBKgNxLMIowrVZ], source[{"prospector":{"type":"log"},"source":"e:\\lm\\tclweb\\htdocs\\utilities\\debug\\logs\\log80_error","beat":{"name":"MK1","version":"6.2.1","hostname":"MK1"},"type":"log","tags":["log","beats_input_codec_plain_applied"],"@timestamp":"2018-07-26T17:43:53.954Z","host":"MK1","message":" [26/Jul:13:43:47] iocp12132 Error 408 / {} /","@version":"1","offset":403625996}]}]

The file e:\lm\tclweb\htdocs\utilities\debug\logs\log80_error comes from a Windows server running filebeat. So it seems my Windows servers running filebeat are no longer shipping messages.

Grrrr. I seem to be making things worse and worse. How do I fix the issue(s) creating these log entries?


Hello @diggy,

I think you are hitting one of the breaking changes in the 6.3.0 release. We have created documentation for this problem and a workaround.
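For reference, the Beats 6.3 breaking-changes note describes a mapping conflict on the `host` field: Beats 6.3 started sending `host` as an object (`host.name`, etc.), so an index whose mapping was created by a 6.3 shipper rejects events where `host` is still a plain string, which matches the `mapper_parsing_exception` above. The documented workaround is a small Logstash mutate filter along these lines (a sketch; verify against the linked documentation for your exact versions):

```
filter {
  mutate {
    # Rename the flat `host` string into the object form (`host.name`)
    # so it matches the index mapping created by Beats >= 6.3 and the
    # "tried to parse field [host] as object" error goes away.
    rename => { "[host]" => "[host][name]" }
  }
}
```

Note that this only affects newly indexed events; the mapping of an existing index such as filebeat-2018.07.26 cannot be changed in place, so events may keep failing there until the index rolls over or is reindexed.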

Thank you! I'll take a look.
