Mapping fields from Logstash

Hello, colleagues,

We have a problem with Logstash events: we can't save them in Elasticsearch when we use the ruby filter.
Elasticsearch version is 5.2.2.
Logstash config:

...
    filter {
        json {
            source => "message"
        }
        ruby {
            code => "
                event.get('metrics').each { |kv|
                    event.set(kv['name'], kv['value'])
                }
            "
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
        elasticsearch {
        }
    }

Error message in the Logstash logs:

"status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [heap] of different type, current_type [float], merged_type [ObjectMapper]"}

But in the old version (2.3.5) it works with a similar config:

...
    filter {
        json {
            source => "message"
        }
        ruby {
            code => "
                hash = event['metrics']
                hash.each { |kv|
                    event[kv['name'].gsub('.','_')] = kv['value']
                }
            "
        }
    }
...

Why does it not work on version 5.2.2? Is there another possible way to do the same?

I have moved this one over to logstash, where it seems more suitable.

It looks as if your event is somehow being merged with another dataset, creating a JSON object instead of a float for the heap field. Does that field already exist somehow?

In order to debug this further, it would make a lot of sense to also supply a sample document, so that the Logstash developers can reproduce this issue.
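One thing that stands out, although this is untested: your 2.3.5 config replaced the dots in the metric names with gsub('.','_'), but the 5.2.2 config does not. Elasticsearch 5.x expands a dotted field name (say, a hypothetical heap.used) into an object called heap, which would then collide with a plain float field heap, exactly as the error describes. A rough sketch that keeps the old replacement while using the 5.x event API would be:

    ruby {
        code => "
            # same dot-to-underscore replacement as in the 2.3.5 config,
            # written against the 5.x event API (event.get / event.set)
            event.get('metrics').each { |kv|
                event.set(kv['name'].gsub('.', '_'), kv['value'])
            }
        "
    }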

I tried it with Elasticsearch running in a container started without any mapped volumes, so it should have been clean.
Sample document:
{"name":"test-api","metrics":[{"name":"mem","value":738891.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"mem.free","value":478694.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"heap.committed","value":598528.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"heap.init","value":786432.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"heap.used","value":119833.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"heap","value":699392.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"nonheap.committed","value":143424.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"nonheap.init","value":2496.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"nonheap.used","value":140363.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"nonheap","value":0.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"threads.peak","value":115.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"threads.daemon","value":100.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"threads.totalStarted","value":1370.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"threads","value":114.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"gc.ps_scavenge.count","value":1102.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"gc.ps_scavenge.time","value":11410.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"gc.ps_marksweep.count","value":3.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"gc.ps_marksweep.time","value":2099.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"gauge.servo.response.templates","value":405.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"gauge.servo.rest.min","value":0.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"gauge.servo.rest.max","value":0.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"gauge.servo.response.health","value":1.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"gauge.servo.counter.span.accepted","value":1.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"httpsessions.max","value":-1.0,"timestamp":"2017-06-06T20:10:05.065Z"},{"name":"httpsessions.active","value":0.0,"timestamp":"2017-06-06T20:10:05.065Z"}],"createdTime":"2017-06-06T17:10:05.065Z","properties":{}
