Hello,
My ELK stack is version 7.9.2. It has been working fine with Filebeat 6.4, but after upgrading Filebeat to 7.0 I get the following error in the Logstash logs:
[2021-02-24T15:24:56,954][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash_roll_alias", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x50f1b482>], :response=>{"index"=>{"_index"=>"logstash_roll_alias-002766", "_type"=>"_doc", "_id"=>"DrVt1HcBeFTUpuC4uvc2", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [agent] of type [text] in document with id 'DrVt1HcBeFTUpuC4uvc2'. Preview of field's value: '{hostname=srv0103, id=e4433e1c-9e88-449d-955e-70eb8e32abcf, type=filebeat, ephemeral_id=e11a40e7-266b-424c-aad8-0e60e8c6012b, version=7.0.0}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:10"}}}}}
Does anyone know what the reason for this is and how to fix it?
Thanks in advance!
In Elasticsearch a field can be text, or it can be an object, but it cannot be a string on some documents and an object on others. It seems that Filebeat 6.4 populated [agent] as a string, but 7.0 (which introduced ECS) populates it as an object with [hostname], [id], etc. sub-fields.
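For illustration, these are roughly the two conflicting shapes (field values are made up here; only the structure matters, taken from the error message above):

```
# Filebeat 6.4: [agent] is a plain string
{ "agent": "filebeat 6.4" }

# Filebeat 7.0 (ECS): [agent] is an object with sub-fields
{ "agent": { "hostname": "srv0103", "type": "filebeat", "version": "7.0.0" } }
```

Once the index mapping has [agent] as text (from the 6.4 documents), any 7.0 document arriving with [agent] as an object is rejected with the mapper_parsing_exception you see.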
If you are running with daily indexes it will just start working at midnight UTC when you roll over to a new index. If not, you have to choose whether to re-index all of the old data with [agent] changed to be an object, or mutate all the new data so that [agent] is a string.
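If you go the mutate route, a sketch of a Logstash filter that flattens the new ECS-style [agent] object back into a single string (the sub-field names come from the error message above; the exact string format you build is up to you) might look like this:

```
filter {
  # Only touch events where [agent] arrived as an object (7.0-style)
  if [agent][hostname] {
    # Move the object aside first, since we cannot overwrite an
    # object field with a string in the same step
    mutate {
      rename => { "[agent]" => "[agent_obj]" }
    }
    # Rebuild [agent] as a plain string, matching the old 6.4 mapping,
    # then drop the temporary object
    mutate {
      add_field    => { "agent" => "%{[agent_obj][type]} %{[agent_obj][version]} (%{[agent_obj][hostname]})" }
      remove_field => [ "agent_obj" ]
    }
  }
}
```

Note this loses the structured ECS sub-fields, so treat it as a stopgap until all beats are on 7.x and you can move to the object mapping everywhere.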
Unfortunately yes, I upgraded only one beat, as it is a production cluster and I can't upgrade them all without some testing first, since other issues may come up along the way.