Unrecognized @timestamp value type=class org.jruby.RubyFloat


(Akash Kumar Dutta) #1

The configuration I am using for Logstash is:

input {
  kafka {
    type => "docker-logs"
    codec => "json"
    bootstrap_servers => "kafka-0.kafka-svc.ns-adutta.svc.cluster.local:9093,kafka-1.kafka-svc.ns-adutta.svc.cluster.local:9093,kafka-2.kafka-svc.ns-adutta.svc.cluster.local:9093"
    topics => "ops.docker-logs-fluentbit.stream.json.001"
    group_id => "id-1"
  }
}

output {
  elasticsearch {
    hosts => "https://abc:9243/"
    index => "%{type}"
    user => "xx"
    password => "xxxx"
    manage_template => false
    cacert => "/etc/ssl/certs.pep"
  }
}

When I view the logs stored in Kafka, they look like this:

 {"@timestamp":1528095163.505740, "PRIORITY":"6", "_PID":"32540", "_UID":"0", "_GID":"0", "_COMM":"dockerd-current", "_EXE":"/usr/bin/dockerd-current", "_CMDLINE":"/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --log-driver=journald --signature-verification=false --storage-driver overlay2", "_CAP_EFFECTIVE":"1fffffffff", "_SYSTEMD_CGROUP":"/systemd/system.slice", "_BOOT_ID":"7a898ed05d7f42778854fb43e4768eb1", "_MACHINE_ID":"482abce829884469bbc877cfa2188a6b", "_HOSTNAME":"abc.com", "SYSLOG_FACILITY":"3", "_TRANSPORT":"stdout", "SYSLOG_IDENTIFIER":"dockerd-current", "_SYSTEMD_UNIT":"docker.service", "MESSAGE":"time=\"2018-06-04T02:52:43.505340908-04:00\" level=warning msg=\"containerd: unable to save b25bfdf19b19ed63185ae63c9d977e386e70c866496ae82e88f006fc39de562d:4fb29752586c0951406e47d83db75a4cfbc2fc821955497b3877974a9968f436 starttime: open /proc/10424/stat: no such file or directory\""}

The problem is that my Logstash instance continuously logs warnings like these:

[2018-06-04T07:14:34,574][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
[2018-06-04T07:14:34,575][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
[2018-06-04T07:14:35,002][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
[2018-06-04T07:14:35,002][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
[2018-06-04T07:14:35,002][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
[2018-06-04T07:14:35,002][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
[2018-06-04T07:14:35,002][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat

Kibana shows a _timestampparsefailure tag attached to the log events.
Any help is appreciated!
Logstash version used: 6.2.4


(Akash Kumar Dutta) #2

From what I have found, the issue seems to be the UNIX epoch timestamp. The json input codec emits this warning when it is unable to parse the @timestamp field, as documented here:

If the parsed data contains a @timestamp field, we will try to use it for the event’s @timestamp, if the parsing fails, the field will be renamed to _@timestamp and the event will be tagged with a _timestampparsefailure.

Any idea how I can remove this warning by converting the timestamp field into a format the parser recognizes as valid input?
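One possible workaround (a sketch, not tested against your setup): since the codec renames the unparseable field to _@timestamp and tags the event, a date filter using the UNIX pattern (which accepts integer or float seconds since epoch) can rebuild @timestamp from that renamed field, then clean up the leftovers on success:

filter {
  date {
    # Parse the renamed epoch-seconds field (a float like 1528095163.505740)
    match => ["_@timestamp", "UNIX"]
    # On success, drop the temporary field and the failure tag
    remove_field => ["_@timestamp"]
    remove_tag   => ["_timestampparsefailure"]
  }
}

This only suppresses the symptom after parsing; the WARN line itself comes from the codec, so the cleaner fix is to have the producer (Fluent Bit, in your case) emit @timestamp as an ISO8601 string rather than a numeric epoch value.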


(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.