How to change the default type

Hi everyone, when I index flow data into ES, the IPV4_SRC_ADDR and IPV4_DST_ADDR fields are mapped as string type, and I want to change them to the ip type.
This is the mapping config:

PUT /_template/logstash
{
  "index_patterns": "logstash-*",
  "order": 1,
  "settings": {
    "index": {
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "default": {
      "properties": {
        "IPV4_SRC_ADDR": { "type": "ip" },
        "IPV4_DST_ADDR": { "type": "ip" },
        "PROTOCOL": { "type": "integer" }
      }
    }
  }
}

I get this error message:
Rejecting mapping update to as the final mapping would have more than 1 type: [log, doc]

thank you in advance!

That will probably depend on your Logstash config; it looks like something is setting document_type to log.
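If the output section of your pipeline sets it explicitly, it would look roughly like the sketch below (placeholder host; document_type is the option of the elasticsearch output that controls the type name it writes):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # document_type sets the _type Logstash writes; in ES 6.x an index can
    # only hold one mapping type, so this has to match the type name your
    # index template defines.
    document_type => "log"
  }
}

If document_type is not set at all, recent Logstash versions default the type to "doc".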

Hi, this is my Logstash config:

input {
  tcp {
    host => "172.30.154.74"
    port => 5510
    codec => json
  }
}
filter {
  mutate {
    remove_field => ["SRC_IP_COUNTRY","SRC_IP_LOCATION","DST_IP_LOCATION","TCP_FLAGS","SERVER_NW_LATENCY_MS","NTOPNG_INSTANCE_NAME","LAST_SWITCHED","INTERFACE","DST_IP_LOCATION","DST_IP_COUNTRY","CLIENT_NW_LATENCY_MS","IN_PKTS","OUT_PKTS","IN_SRC_MAC","OUT_SRC_MAC","OUT_DST_MAC","FIRST_SWITCHED","host","ntop_timestamp","port","json","L4_DST_PORT","L4_SRC_PORT","@version","_type"]
  }
}
output {
  elasticsearch {
    codec => "json"
    hosts => ["172.30.124.254:9200","172.30.451.52:9200"]
  }
  stdout { codec => rubydebug }
}

After setting the template, ES can't store the data anymore.

Is this the only Logstash instance talking to your cluster?
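You can also check which mapping types the daily index has already ended up with; the index name here is just an example for today's index:

GET logstash-2018.03.23/_mapping

In ES 6.x an index can only hold a single mapping type, so if the template creates the index with one type name and Logstash then writes documents with a different one, every new document is rejected.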

Yes.
This is the Logstash error message:

[2018-03-23T16:34:26,924][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.03.23", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x751a88a7>], :response=>{"index"=>{"_index"=>"logstash-2018.03.23", "_type"=>"doc", "_id"=>"jEb-UWIBJRgB5JfYn-uk", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Rejecting mapping update to [logstash-2018.03.23] as the final mapping would have more than 1 type: [default, doc]"}}}}

The problem didn't happen in ES version 5.x.

The problem is solved after modifying the mapping template config:

PUT _template/logstash
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "doc": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "IPV4_SRC_ADDR": { "type": "ip"},
        "IPV4_DST_ADDR": { "type": "ip"},
        "PROTOCOL": { "type": "integer"}
      }
    }
  }
}
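Note that a template only applies to indices created after it is put in place, so the old conflicting daily index has to be deleted or left to roll over to the next day. The new field types can then be checked with the field mapping API (the index name below is just an example of the next daily index):

GET logstash-2018.03.24/_mapping/field/IPV4_SRC_ADDR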

I have another question: what value of "number_of_shards" is best for my cluster, which has four nodes and a large amount of data?

thank you very much.

Are you using daily indices? If so, how large are they?

Yes, about 20 GB per day.

Then I suspect 2 primary shards would be suitable. With 1 replica configured that gives 4 total shards per index, which matches your node count.
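Mapped onto the template above, that would look roughly like this (same mappings as your working template, only the settings block changes):

PUT _template/logstash
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  },
  "mappings": {
    "doc": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "IPV4_SRC_ADDR": { "type": "ip"},
        "IPV4_DST_ADDR": { "type": "ip"},
        "PROTOCOL": { "type": "integer"}
      }
    }
  }
}

At around 20 GB per day that works out to roughly 10 GB per primary shard, and with one replica each of the four nodes holds one shard copy per daily index.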

ok. I got it! thanks a lot :slight_smile:
