Merging dynamic updates triggered a conflict

I'm using Elasticsearch 2.1 and Logstash 2.1 to parse logs.

One log line in particular always triggers an error in Logstash:

:response=>{
  "create"=>{
    "_index"=>"logstash-2015.12.15",
    "_type"=>"logs",
    "_id"=>"AVGnCkWh8dsIxdcksmWF",
    "status"=>400,
    "error"=>{
      "type"=>"mapper_parsing_exception",
      "reason"=>"Merging dynamic updates triggered a conflict: mapper [additional.attrsCallback(NG-SOHWR0B1)] of different type, current_type [long], merged_type [double]"
    }
  }
}

Here is the log line (as JSON):

{
  "severity": 2,
  "timeStamp": "2015/12/15 19:08:08.087Z",
  "server": "foobar.example.com",
  "caller": "foobar.Search.Timings",
  "id": {
    "type": "timing event",
    "id": "0000000111112222",
    "class": "foobar.Search",
    "function": "Dispose",
    "thread": 94
  },
  "message": "Query timings",
  "additional": {
    "file": "c:\\foo\\bar.cs",
    "line": 1416,
    "Validators": 15.6253,
    "get customFields": [15.6253, 0],
    "attrsCallback(NG-SOHWR0B1)": [0, 15.6253],
    "Parse Query": 0,
    "Network request http://10.10.10.10:8900/solr/xxxxxxxx/select": 31.2498,
    "generate query XXXXXXX": 0,
    "ndSolrQueryGeneratorFTI.generateQuery MD": 0,
    "getServerAndCores": 0,
    "process results": 0
  }
}

To work around the issue, I'm trying to use a mutate filter in Logstash to convert the following field to a float:

  "attrsCallback(XX-YYYYY)":[0,15.6253],
filter {
    mutate {
      convert => [  "[additional][attrsCAllback(*)]", "float"]
    }
}

Unfortunately this does not work; I still see the same error in the Logstash logs.
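
As far as I can tell, Logstash field references are taken literally, so the (*) wildcard in the convert setting never matches the real field name, and the part in parentheses appears to vary from event to event anyway. One Logstash-side alternative would be a ruby filter that walks the additional hash and coerces every numeric value to a float, so dynamic mapping only ever sees doubles. This is just a sketch, assuming the Logstash 2.x event API (event['field']) and that every numeric field under additional should become a float:

filter {
  ruby {
    code => "
      additional = event['additional']
      if additional.is_a?(Hash)
        additional.each do |key, value|
          if value.is_a?(Array)
            # coerce numeric array elements, e.g. [0, 15.6253] => [0.0, 15.6253]
            additional[key] = value.map { |v| v.is_a?(Numeric) ? v.to_f : v }
          elsif value.is_a?(Numeric)
            additional[key] = value.to_f
          end
        end
        event['additional'] = additional
      end
    "
  }
}

The error itself also hints at why a single document trips this up: the array [0, 15.6253] mixes a whole number and a decimal, so dynamic mapping first picks long for the field and then tries to merge in double.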

Any advice on how I can convert the line to something that Elasticsearch won't choke on? I'd rather use Logstash to mutate the field than create a custom Elasticsearch mapping, since the former is much easier to deploy.

The problem is that the existing mapping for that field is long and you're trying to post a document with a double, so converting the value in Logstash to a float won't address the problem for the current index. Pick a data type and stick to it. If you want it to be a floating-point field, you'll have to reindex the existing data.
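
For what it's worth, you can see what the day's index has already recorded for the field by fetching the mapping (host and port are just placeholders for your cluster):

curl -XGET 'http://localhost:9200/logstash-2015.12.15/_mapping/logs?pretty'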

Good point. The mapping will be updated with a new index, right? Since Logstash indices are created every day, tomorrow's index should map the field as a float. I assume the problem will go away with the new index?

Hopefully, but you should really specify the mapping explicitly to make sure.
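
A sketch of what an explicit mapping could look like as an index template for Elasticsearch 2.x; the template name, the order value, and the decision to force every numeric field under additional to double are assumptions rather than anything from this thread:

curl -XPUT 'http://localhost:9200/_template/logstash-additional-doubles' -d '
{
  "template": "logstash-*",
  "order": 1,
  "mappings": {
    "logs": {
      "dynamic_templates": [
        {
          "additional_numbers_as_double": {
            "path_match": "additional.*",
            "match_mapping_type": "long",
            "mapping": { "type": "double" }
          }
        }
      ]
    }
  }
}'

With an order higher than the default Logstash template it merges on top of it, and because the dynamic template only rewrites fields that would otherwise be detected as long, values such as 15.6253 still map as double, so every numeric field under additional ends up double in each new daily index.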