Dynamic-Template Errors After Upgrading to ES 6.8

Evening ES. I'm having trouble indexing certain events after migrating and upgrading to ES 6.8 from 5.3. I believe the root cause of the error is an improper mapping in one of the dynamic templates created by our previous ES administrator.

Both the error and template are below. Please bear with me as I have next to no Elastic experience beyond what I've done to migrate our on-prem cluster to AWS ES and upgrade to 6.8 from 5.3.

Please let me know if I can provide any additional information to assist.

Logstash is spitting out the following error while indexing certain documents:

[2019-09-23T19:46:13,918][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2019.09", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x6aba739e>], :response=>{"index"=>{"_index"=>"logstash-2019.09", "_type"=>"doc", "_id"=>"AW1gu6dtBNlnaFfEigoR", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to find type parsed [string] for [level]"}}}}

Template in question:

{
  "logstash" : {
    "order" : 0,
    "version" : 60001,
    "index_patterns" : [
      "logstash-*"
    ],
    "settings" : {
      "index" : {
        "refresh_interval" : "5s"
      }
    },
    "mappings" : {
      "_default_" : {
        "dynamic_templates" : [
          {
            "message_field" : {
              "path_match" : "message",
              "mapping" : {
                "norms" : false,
                "type" : "text"
              },
              "match_mapping_type" : "string"
            }
          },
          {
            "string_fields" : {
              "mapping" : {
                "norms" : false,
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "ignore_above" : 256,
                    "type" : "keyword"
                  }
                }
              },
              "match_mapping_type" : "string",
              "match" : "*"
            }
          }
        ],
        "properties" : {
          "@timestamp" : {
            "type" : "date"
          },
          "geoip" : {
            "dynamic" : true,
            "properties" : {
              "ip" : {
                "type" : "ip"
              },
              "latitude" : {
                "type" : "half_float"
              },
              "location" : {
                "type" : "geo_point"
              },
              "longitude" : {
                "type" : "half_float"
              }
            }
          },
          "@version" : {
            "type" : "keyword"
          }
        }
      }
    },
    "aliases" : { }

Posts I've been referencing:

Bumping this up to the top for extra eyes. I believe the root cause of the issue is that the "string" type has been deprecated, per the Elastic blog post linked below. I used the mapping API to retrieve the mapping of our Logstash index with GET /logstash-2019.09/_mapping. The only two places where the "string" type is used are posted below.
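
For anyone else hitting this, my rough understanding of the change (using the [level] field from the error above purely as an illustration, not our exact mapping) is that a 5.x definition like this:

"level" : {
  "type" : "string",
  "index" : "not_analyzed"
}

has to become either a keyword or a text field under 6.x, for example:

"level" : {
  "type" : "keyword"
}

because the "string" type no longer exists for indices created in 6.x.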

Digging through the rest of the index mapping for the other fields that throw mapper_parsing_exception errors, those fields appear to be mapped properly using the "keyword" type.

Is there any way to raise the character limit on these posts? I'd like to post the full index mapping, but unfortunately it is over 7,000 lines long, which I suspect is a problem in and of itself.
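
Rather than posting all 7,000+ lines, one option might be the field mapping API, which returns the mapping for a single field at a time, e.g.:

GET /logstash-2019.09/_mapping/field/level
GET /logstash-2019.09/_mapping/field/syslog_text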

Also, for what it's worth, I started our ES migration by moving the existing data from our on-prem deployment (5.3) to our new Amazon ES deployment (6.8) using the Snapshot/Restore API. I am now trying to index live events using Logstash 6.8. Could this be part of the problem? Is it possible I need to reindex the existing data in some way?
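
If it turns out the restored data does need to be reindexed, my rough understanding is that it could be done inside the 6.8 cluster itself, copying each restored index into a new one that still matches logstash-* so it picks up the 6.x template. The destination name below is just a placeholder I made up:

POST _reindex
{
  "source" : { "index" : "logstash-2019.09" },
  "dest" : { "index" : "logstash-2019.09-reindexed" }
}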

GET /logstash-2019.09/_mapping

Mapping Snippet with "string"

            "string_fields" : {
              "match" : "*",
              "match_mapping_type" : "string",
              "mapping" : {
                "fielddata" : {
                  "format" : "disabled"
                },
                "fields" : {
                  "raw" : {
                    "ignore_above" : 256,
                    "index" : "not_analyzed",
                    "type" : "string"

Sample Mapping Snippets:

There are 19 occurrences of this same mapping for "syslog_text"; all of them are defined the same.

[2019-09-23T19:53:02,394][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2019.09", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x3fa39351>], :response=>{"index"=>{"_index"=>"logstash-2019.09", "_type"=>"doc", "_id"=>"AW1gweMirQfrb52qfo0M", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to find type parsed [string] for [syslog_text]"}}}}

          },
          "syslog_text" : {
            "type" : "text",
            "norms" : false,
            "fields" : {
              "raw" : {
                "type" : "keyword",
                "ignore_above" : 256
              }

There are 18 occurrences of this same mapping for "path"; all of them are defined the same.

[2019-09-23T19:53:02,395][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2019.09", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x701eb47d>], :response=>{"index"=>{"_index"=>"logstash-2019.09", "_type"=>"doc", "_id"=>"AW1gweMirQfrb52qfo0O", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to find type parsed [string] for [path]"}}}}

          "path" : {
            "type" : "text",
            "norms" : false,
            "fields" : {
              "raw" : {
                "type" : "keyword",
                "ignore_above" : 256

Hi @elasticTrouble

You can use Gist, Pastebin, or similar to share larger content.

I think it's better to do a remote reindex: you can set a new, clean template and mapping on your 6.x cluster (the destination) and start reindexing your data into it from there.
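
Something along these lines, assuming the source cluster is whitelisted via reindex.remote.whitelist on the destination and the destination index is created fresh so it picks up the clean template (the host name below is a placeholder):

POST _reindex
{
  "source" : {
    "remote" : {
      "host" : "http://old-cluster.example.com:9200"
    },
    "index" : "logstash-2019.09"
  },
  "dest" : {
    "index" : "logstash-2019.09"
  }
}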

@gabriel_tessier Thanks for the advice on using Gist or Pastebin; I'm not sure why I didn't think of those in the first place. Unfortunately, since I'm using AWS ES (the hosted service, not self-managed EC2 instances), I don't believe reindex-from-remote is available to me, or I would have started down that path long ago.

Thankfully, I was able to solve my issue by deleting the latest Logstash index from my cluster and restarting the flow of events from Logstash. I believe the root cause was that I had initially been feeding every document from Logstash into the logstash index by accident.
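
For anyone who lands here later, the fix amounted to roughly the following (index name as in the logs above). Assuming automatic index creation is enabled (the default), the next batch of events from Logstash recreated the index and the logstash-* template applied the proper 6.x mappings:

DELETE /logstash-2019.09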
