Rejected mapping because final mapping would have more than one type

Hi,

I just migrated my ES cluster from the AWS Elasticsearch Service to an EC2 instance, and despite not having changed anything in my configs or in Logstash (running on a separate instance), my mapping is suddenly getting rejected.

[logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-netflow-test-2018.07", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x262ee34b>], :response=>{"index"=>{"_index"=>"logstash-netflow-test-2018.07", "_type"=>"doc", "_id"=>"uv2dnWEBlrOJlly6qpiU", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Rejecting mapping update to [logstash-netflow-test-2018.07] as the final mapping would have more than 1 type: [netflow, doc]

At no point am I defining the type as doc.

Logstash output

output {
    if [type] == "netflow" {
        elasticsearch {
            hosts => ["http://myip:9200"]
            index => "logstash-netflow-test-%{+xxxx.ww}"
        }
    }
}

And the mapping

PUT _template/netflow
{
    "index_patterns" : ["logstash-netflow-test*"],
    "settings" : {
        "number_of_shards" : 1,
		"number_of_replicas" : 0
    },
    "mappings": {
      "netflow": {
        "properties": {
          "@ingest_time": {
            "type": "date"
          },
          "@timestamp": {
            "type": "date"
          },
          "Location": {
            "type": "keyword",
            "ignore_above": 256
          },
          "application_id": {
            "type": "keyword",
            "ignore_above": 256
          },
          "application_id_trans": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "dst_addr": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "dst_addr_host": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "dst_port": {
            "type": "long"
          },
          "dst_port_trans": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "gateway": {
            "type": "keyword",
            "ignore_above": 256
          },
          "in_bytes": {
            "type": "long"
          },
          "out_bytes": {
            "type": "long"
          },
          "protocol": {
            "type": "long"
          },
          "src_addr": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "src_port": {
            "type": "long"
          },
          "type": {
            "type": "keyword",
            "ignore_above": 256
              }
            }
          }
        }
      }

It gets even stranger: when I remove http:// from the output, the error changes to this

[logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-netflow-test-2018.06", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x4cf5057e>], :response=>{"index"=>{"_index"=>"logstash-netflow-test-2018.06", "_type"=>"doc", "_id"=>"MP2UnWEBlrOJlly6bSdp", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Mixing up field types: class org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType != class org.elasticsearch.index.mapper.KeywordFieldMapper$KeywordFieldType on field Location"

What am I doing wrong?

ES 6.2

I tested with the default mapping, and the data gets processed like this

type: netflow
_type: doc

But if I add a "_type" field to my mapping, it complains about there being two type fields, even though they don't share the same field name.

How come the default dynamic template can process data with both a type and a _type field, but my mapping can't?

I'm setting the type field in Logstash to help identify my data, so what should I do? I could do some awkward rename, but I can't imagine that being necessary.

The default type for the Elasticsearch output plugin is 'doc' for Elasticsearch 6.x clusters, which is clashing with the 'netflow' type you have specified in the mapping. I suspect setting document_type => "netflow" in the Elasticsearch output should resolve the issue.
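
For example, a minimal sketch of the adjusted output, reusing the hosts and index pattern from the config above:

output {
    if [type] == "netflow" {
        elasticsearch {
            hosts => ["http://myip:9200"]
            index => "logstash-netflow-test-%{+xxxx.ww}"
            # Match the type name used in the index template so the
            # index only ever sees a single mapping type
            document_type => "netflow"
        }
    }
}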

Hi Christian,

Thanks, I will try that. But since types will be removed altogether, what is the recommended way to "id" different kinds of data going from Logstash to Elasticsearch in the future? Should tags be used instead?

You can add a new keyword field to each document and use this to indicate the type. You can then filter on this field, e.g. through saved searches in Kibana.
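
For example, a sketch using a hypothetical event_type field (the name is just an illustration), set in the Logstash filter section:

filter {
    mutate {
        # Add a plain field to every event to mark what kind of data it is
        add_field => { "event_type" => "netflow" }
    }
}

and mapped as a keyword in the template alongside the other fields:

          "event_type": {
            "type": "keyword",
            "ignore_above": 256
          }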

Thanks.

I figured out that changing netflow to doc and then just adding the type field to my mapping works as well.

"mappings": {
  "doc": {
    "properties": {

The type => netflow functionality in the Logstash input will not be removed, right?
