We rolled over to new indexes yesterday, and this is the first time we've rolled over since upgrading to ES6. Immediately we started getting exceptions from Logstash and losing data, with errors like this:
[2017-12-01T09:40:09,385][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"864b96d9-ac21-46f5-b166-3e0cd7930ea0", :_index=>"iis_log_prod-2017.12", :_type=>"iis_log_entry", :_routing=>nil}, 2017-12-01T09:40:07.848Z z49os2swb016 %{message}], :response=>{"index"=>{"_index"=>"iis_log_prod-2017.12", "_type"=>"iis_log_entry", "_id"=>"864b96d9-ac21-46f5-b166-3e0cd7930ea0", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to find type parsed [string] for [log_timestamp]"}}}}
We use a lot of dynamic templates, and the index failing above uses this template:
```json
"iis_log": {
  "order": 0,
  "index_patterns": [
    "iis_log*"
  ],
  "settings": {
    "index": {
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "iis_log_entry": {
      "properties": {
        "win32response": { "type": "integer" },
        "@timestamp": { "type": "date" },
        "timetaken": { "type": "integer" },
        "serverIP": { "type": "ip" },
        "response": { "type": "integer" },
        "clientIP": { "type": "ip" },
        "subresponse": { "type": "integer" },
        "port": { "type": "integer" }
      },
      "dynamic_templates": [
        {
          "notanalyzed": {
            "match_mapping_type": "string",
            "mapping": {
              "index": "not_analyzed",
              "type": "string"
            },
            "match": "*"
          }
        }
      ]
    }
  },
  "aliases": {}
}
```
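For context: Elasticsearch 6.x no longer accepts the legacy `string` field type (or `"index": "not_analyzed"`) when creating new indexes, which is why the error only appeared at rollover, when the template was applied to a freshly created index. If the goal is the old not-analyzed (exact-match) behaviour, the 6.x equivalent is the `keyword` type. A sketch of what that section of the template might look like (this is my guess at the intended fix, not a confirmed solution):

```json
"dynamic_templates": [
  {
    "notanalyzed": {
      "match_mapping_type": "string",
      "match": "*",
      "mapping": {
        "type": "keyword"
      }
    }
  }
]
```

Since templates are only applied at index creation, presumably the updated template would have to be re-PUT to the cluster before the next rollover for it to take effect.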
We were originally running Logstash 5.6, but upgraded to 6.0 to see if that would help.
I don't understand what I need to change to get my log pipeline working again.