Mapper_Parsing_Exception After Logstash Rollback

We recently rolled back Logstash from 6.5.4 to 6.4.1 due to issues with the geoip plugin. Since then I've been seeing the log lines below in Logstash; the value being mapped changes, but it's always the same field. How do I resolve this? We are still running Elasticsearch/Kibana 6.5.4, if that matters.

[2019-01-14T11:56:48,122][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"winlogbeat-6.4.1-2019.01.14", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x60f3c3d3>], :response=>{"index"=>{"_index"=>"winlogbeat-6.4.1-2019.01.14", "_type"=>"doc", "_id"=>"mLOCTWgBPHqyIdtlP_Ms", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [event_data.param1] of type [date]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Invalid format: \"Remote Registry\""}}}}}
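
For anyone hitting the same error, one quick way to confirm how the field is currently typed is the field mapping API (run from Kibana Dev Tools; the index name here is taken from the log line above):

GET winlogbeat-6.4.1-2019.01.14/_mapping/field/event_data.param1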

I can see in the index mapping that [event_data][param1] has two mappings configured. The mappings under [event_data] run from line 57 to line 2,124. On line 1,958:

"param1": {
  "type": "date"
},

and then again on line 2,405:

"param1": {
  "type": "text",
  "fields": {
    "keyword": {
      "type": "keyword",
      "ignore_above": 256
    }
  }
},

It appears the earlier mapping is incorrect. How do I remove just this mapping without affecting the entire index template, and how did this happen in the first place? My Logstash pipeline has manage_template set to false on the Elasticsearch output. Is that the reason this is happening, and should it be set to true to prevent it?
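
My guess, and it is only a guess, is that the date mapping came from dynamic date detection: if the first event indexed into a new daily index happened to carry a date-looking value in [event_data][param1], Elasticsearch would have typed the field as date, and every later string value like "Remote Registry" would then fail. Something like this reproduces the same error against a throwaway index (the index name here is made up):

PUT test-dynamic-date/doc/1
{
  "event_data": { "param1": "2019-01-14T11:56:48" }
}

PUT test-dynamic-date/doc/2
{
  "event_data": { "param1": "Remote Registry" }
}

The second request comes back with the same mapper_parsing_exception, "failed to parse field [event_data.param1] of type [date]", as in the log above.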

Used the hammer approach: removed the manage_template setting, set template_overwrite => true, and changed index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" to include a 1 after the date, to force the creation of a new index with fresh mappings. The resulting mappings file shrank from 2,440 lines to 1,632, and I'm not getting mapping errors in the logs anymore.
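
For reference, the Elasticsearch output block now looks roughly like this (hosts is a placeholder; the rest is as described above):

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}1"
    template_overwrite => true
    # manage_template removed, so the plugin default (true) applies
  }
}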

I'm thinking I set manage_template to false back during the initial implementation, when I didn't really know what I was doing, and it finally came back to bite me. About a week's worth of unknown logs lost... makes a good argument for getting the DLQ configured.
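
In case it helps anyone else, enabling the dead letter queue is a one-line change in logstash.yml (the path line is optional and the path shown is just an example):

# logstash.yml
dead_letter_queue.enable: true
# path.dead_letter_queue: /var/lib/logstash/dead_letter_queue   (defaults to path.data/dead_letter_queue)

Events that Elasticsearch rejects with a 400 mapping error then land in the DLQ instead of being dropped, and can be replayed later with the dead_letter_queue input plugin.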
