Cannot be changed from type [long] to [float]


(Dennis) #1

Hi,

I've got an issue with logstash/Elasticsearch...

I'm getting these messages in my Logstash log:

[2018-09-07T02:00:01,438][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.09.07", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x6f2e913c>], :response=>{"index"=>{"_index"=>"logstash-2018.09.07", "_type"=>"doc", "_id"=>"Tg9UsWUBAQ_cKmP3D0NX", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.process.cpu.total.pct] cannot be changed from type [long] to [float]"}}}}

I'm getting it for the following metrics:

system.process.cpu.total.pct
system.core.irq.pct
system.load.1
system.diskio.iostat.write.request.per_sec
system.diskio.iostat.queue.avg_size

The issue appears to have happened on the index rollover @ 12:00am.

My setup has been running well for over two weeks, so this is a surprise. It basically means that I'm not receiving any of the above metrics, which I need in order to get process CPU usage for my application. (I was due to give a talk today to my manager/team about how good ES is and show all the work I've been doing, but I've had to postpone until next week now.) Although you can see the values in Kibana, you can't graph them; it's as if they aren't being seen by ES (similar to a field bigger than 1024 characters, for example).

I've read a couple of similar topics, but no one gives any details on how they resolved it, just a note that they had to manually update their mappings. Why do you have to do this when nothing on the setup side has changed? If anyone has steps on how to do this I would be grateful to follow them.

Anyway, I'm using Metricbeat outputting to a logfile; Filebeat then takes that logfile and passes it to Logstash, where some minor parsing happens before sending it on to ES. I'm happy to share my config files on request.
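For context, here's a minimal sketch of the Beats side of that pipeline. The paths and hosts are placeholders, not my real config:

```yaml
# metricbeat.yml - write events to a local logfile instead of sending direct
output.file:
  path: "/var/log/metricbeat-out"
  filename: "metricbeat.json"

# filebeat.yml (6.1.x still uses "prospectors") - ship that logfile to Logstash
filebeat.prospectors:
- type: log
  paths:
    - /var/log/metricbeat-out/metricbeat.json*
output.logstash:
  hosts: ["logstash-host:5044"]
```

Each line arrives in Logstash as a JSON string in the message field, which is why my Logstash config runs a json filter on [message].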

If I start a new index, the issue goes away and the metrics all come in OK, but then it failed again on the next new index! So that isn't a fix, as it could go wrong again; I need to resolve this so I can use my original index again.

Can anyone help?

Versions:
metricbeat version 6.3.2 (amd64), libbeat 6.3.2
filebeat version 6.1.3 (amd64), libbeat 6.1.3
logstash 6.1.1
elasticsearch-6.3.2

Regards


(Dennis) #2

After further testing I'm even more confused.

I've tried creating a few indexes without making any other changes. The indexes are just created by Logstash in the output section.

The first new index worked with no issues.
The second new index had the same issue as before (missing CPU information).

Why is there a difference when I'm using the same data? Why does one work and not the other? It seems very odd, and I have no idea how to resolve it or stop it happening in the future.

Here's my logstash configuration.

input {
  beats {
    port => 5044
  }
}

filter {
  json {
    source => "message"
  }
  grok {
    match => { "[system][process][cmdline]" => ".*\s-Dname=%{WORD:system.process.appname}\s.*" }
  }
}

# Below added due to a change in ES version 6.3.2
# https://discuss.elastic.co/t/logstash-errors-after-upgrading-to-filebeat-6-3-0/135984/28
filter {
  mutate {
    remove_field => [ "[host]" ]
  }
  mutate {
    add_field => {
      "host" => "%{[beat][hostname]}"
    }
  }
}

#filter {
#  mutate {
#    # Note: mutate's convert only supports Logstash types like "float";
#    # "scaled_float" is an Elasticsearch mapping type and isn't valid here.
#    convert => { "[system][process][cpu][total][pct]" => "float" }
#    convert => { "[system][core][irq][pct]" => "float" }
#    convert => { "[system][load][1]" => "float" }
#    convert => { "[system][diskio][iostat][write][request][per_sec]" => "float" }
#    convert => { "[system][diskio][iostat][queue][avg_size]" => "float" }
#  }
#}

output {
  if "application" in [tags] {
    elasticsearch {
       hosts => "http://server1.xmp.net.intra:9200"
       index => 'application1-%{+YYYY.MM.dd}'
    }
  } else {
      elasticsearch {
        hosts => "http://server1.xmp.net.intra:9200"
#        index => 'logstash-%{+YYYY.MM.dd}'
        index => 'logstash-json-%{+YYYY.MM.dd}'
#        index => 'metrics-logstash-%{+YYYY.MM.dd}'
      }
   }
}

It's the logstash index after the else above.


(Dennis) #3

More updates... :slight_smile:

Looking in Kibana at the field, I get this in JSON

"cpu": {
  "total": {
    "norm": {
      "pct": 0.0001

But in the Table view I get 0, when it should also say 0.0001:

system.process.cpu.total.norm.pct    0
system.process.cpu.total.pct         0.003

When I try to graph it I get 0, which isn't helpful.

Is the "type" being lost? How can i get it back? I think this could be an elasticsearch issue and not logstash. Anyone?

Thanks.


(Magnus Bäck) #4

If you want to make sure that a field is mapped in a particular way, you should use an index template. Without explicitly set mappings, Elasticsearch's dynamic mapping lets the first document that contains a field set that field's data type for the whole index. In other words, it seems like system.process.cpu.total.pct sometimes contains an integer (e.g. 0), and when the day's first document with that field happens to hold an integer, the field gets mapped as long and every later float value is rejected.
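As a rough sketch for ES 6.x, a template like the following would pin the problem fields before dynamic mapping sees them. The template name, index pattern and scaling factors here are illustrative, not taken from Metricbeat's shipped template:

```
PUT _template/logstash-json-floats
{
  "index_patterns": ["logstash-json-*"],
  "order": 1,
  "mappings": {
    "doc": {
      "properties": {
        "system": {
          "properties": {
            "process": {
              "properties": {
                "cpu": {
                  "properties": {
                    "total": {
                      "properties": {
                        "pct": { "type": "scaled_float", "scaling_factor": 1000 }
                      }
                    }
                  }
                }
              }
            },
            "load": {
              "properties": {
                "1": { "type": "scaled_float", "scaling_factor": 100 }
              }
            }
          }
        }
      }
    }
  }
}
```

Note that templates only apply to indices created after the template is installed, so an index whose mapping is already wrong would still need to be reindexed.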


(Dennis) #5

Thanks for your reply @magnusbaeck , I suspected that's what happened.

Do you know how I can raise a feature request asking that Metricbeat never sends a value that could be a float as an integer? If this could be resolved at the source, I, along with many others, wouldn't have to cater for it in our configuration (unless I'm the only one... :slight_smile:)

Thanks


(Magnus Bäck) #6

If nobody has already filed such an issue (https://github.com/elastic/beats/issues), you can file one yourself.


(Dennis) #7

Looks like it has already been raised:


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.