Date_time parsing error in Elastic Stack 7.0. Worked fine in 6.7

Hello,

Bit of a strange problem. I was using Elastic Stack 6.7 and have recently created a new 7.0 cluster.

From Filebeat I feed the below log line into both of these versions of Logstash

Log line:

[INFO ] [2019-06-13 00:00:00,711] [SP6-akka.actor.default-dispatcher-15] [akka.tcp://SP6@192.201.101.225:2551/user/BP2MeterManufacturer.File.Pickup/singleton/file-pickup] Next poll in 10 seconds

Here is my logstash conf file - except for the output IP address, it is identical on both of my Elastic Stack instances.

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => {
      "message" => "%{LOGLEVEL:log.level}\s*\] \[%{TIMESTAMP_ISO8601:log.datetime}\]%{GREEDYDATA:log.message}"
    }
  }

  date {
    match => [ "log.datetime", "YYYY-MM-dd HH:mm:ss,SSS" ]
  }
}

output {
  elasticsearch {
    hosts => "192.248.48.31:9200"
    manage_template => false
    index => "mflow-%{+YYYY.MM.dd}"
    document_type => "mflow"
  }
}
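
For context, after the grok and date filters the event that reaches Elasticsearch contains roughly the following fields (a sketch based on the sample line above; the original message field is abbreviated, Beats metadata is omitted, and the exact @timestamp value depends on the pipeline's timezone handling):

{
  "@timestamp": "2019-06-13T00:00:00.711Z",
  "log.level": "INFO",
  "log.datetime": "2019-06-13 00:00:00,711",
  "log.message": " [SP6-akka.actor.default-dispatcher-15] [akka.tcp://SP6@192.201.101.225:2551/user/BP2MeterManufacturer.File.Pickup/singleton/file-pickup] Next poll in 10 seconds",
  "message": "..."
}

The date filter only populates @timestamp; log.datetime is still shipped to Elasticsearch as the raw string.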

In Elastic Stack 6.7 the log file is successfully parsed by Logstash and I can see the output in Elasticsearch. Below is the index info from Elasticsearch; it shows that an index with 12 documents was created (there are 12 lines in the log file).

green open mflow-2019.06.13 JOv2398RQaab2RYu6EehqA 3 1 12 0 151.8kb 75.9kb

However, in Elastic Stack 7.0 I receive the below parsing error in Logstash for the same log line that was parsed successfully in v6.7:

[2019-06-13T13:23:17,587][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"mflow-2019.06.13", :_type=>"mflow", :routing=>nil}, #<LogStash::Event:0xba49170>], :response=>{"index"=>{"_index"=>"mflow-2019.06.13", "_type"=>"mflow", "_id"=>"PhoBUWsBDHqLRRnGvp7M", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [log.datetime] of type [date] in document with id 'PhoBUWsBDHqLRRnGvp7M'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [2019-06-13 00:00:00,711] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"date_time_parse_exception: Failed to parse with all enclosed parsers"}}}}}}

Can anyone suggest what the problem may be here?

Thanks,
Paul

I am also facing the same issue after upgrading to 7.0.

None of the documented formats seem to allow a space between the date and time; they all require a literal 'T'. I wonder if the switch from Joda-Time to java.time in 7.0 resulted in the stricter interpretation. Either way, this is a question about Elasticsearch, not Logstash.
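
A quick way to see the difference outside Logstash is to index a value by hand (a minimal sketch with a throwaway index name, assuming the default strict_date_optional_time||epoch_millis format from the error above):

PUT datetest
{
  "mappings": {
    "properties": {
      "log": {
        "properties": {
          "datetime": { "type": "date" }
        }
      }
    }
  }
}

POST datetest/_doc
{ "log": { "datetime": "2019-06-13 00:00:00,711" } }

POST datetest/_doc
{ "log": { "datetime": "2019-06-13T00:00:00.711" } }

I would expect 7.0 to reject the first document with the same date_time_parse_exception and to accept the second, which uses the 'T' separator and a dot before the milliseconds.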

Well, in my case I have even supplied the format in the field definition itself.

[2019-06-18T19:12:32,923][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"vm-autopatrol-status-2019.06.18", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x7e64a599>], :response=>{"index"=>{"_index"=>"vm-autopatrol-status-2019.06.18", "_type"=>"_doc", "_id"=>"PR0BbGsBHgo23YjuR8BO", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [ts] of type [date] in document with id 'PR0BbGsBHgo23YjuR8BO'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [2019-06-03T18:00:52.432-04:00] with format [yyyy-MM-dd HH:mm:ss|| date_hour_minute_second_fraction||yyyy-MM-dd ||epoch_millis||epoch_second || date_hour_minute_second_millis || basic_date_time || yyyy-MM-dd'T'HH:mm:ss.SSS'Z' || yyyy-MM-dd'T'HH:mm:ss.SSSZ || MM/dd/yyyy HH:mm:ss a]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"date_time_parse_exception: Failed to parse with all enclosed parsers"}}}}}}

If you look at the yyyy-MM-dd'T'HH:mm:ss.SSS'Z' format, it should cover 2019-06-03T18:00:52.432-04:00. I don't know what the issue is.

That would be the format with a literal 'Z'. You do have a format

yyyy-MM-dd'T'HH:mm:ss.SSSZ

but that would match 2019-06-03T18:00:52.432-0400... Can you try adding

yyyy-MM-dd'T'HH:mm:ss.SSSZZ

I added the aforementioned format to the mappings, but it's still not able to parse.

[2019-06-18T21:03:58,260][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"vm-autopatrol-status-2019.06.18", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x661901bc>], :response=>{"index"=>{"_index"=>"vm-autopatrol-status-2019.06.18", "_type"=>"_doc", "_id"=>"qhpnbGsBWba35rWfTJvI", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [ts] of type [date] in document with id 'qhpnbGsBWba35rWfTJvI'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [2019-04-15T16:26:37.716-04:00] with format [yyyy-MM-dd HH:mm:ss|| date_hour_minute_second_fraction||yyyy-MM-dd ||epoch_millis||epoch_second || date_hour_minute_second_millis || basic_date_time || yyyy-MM-dd'T'HH:mm:ss.SSS'Z' || MM/dd/yyyy HH:mm:ss a || yyyy-MM-dd'T'HH:mm:ss.SSSZZ]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"date_time_parse_exception: Failed to parse with all enclosed parsers"}}}}}}

Looks like there is a stricter interpretation in place.

Thanks

Found the solution: using the yyyy-MM-dd'T'HH:mm:ss.SSSXXX date format worked.
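
For anyone who lands here later: as far as I can tell this lines up with the java.time patterns that 7.x uses, where XXX accepts an offset written with a colon (such as -04:00) or a literal Z, while Z/ZZ expect the no-colon form (-0400). The field definition ends up looking something like this:

"ts": {
  "type": "date",
  "format": "yyyy-MM-dd'T'HH:mm:ss.SSSXXX"
}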

Edit:

I have now resolved my issue. I had to reindex in Elasticsearch.

Previously the index template contained this section:

"log": {
  "properties": {
    ...
    "datetime": {
      "type": "date"
    }
  }
}

Adding the format field to the index template and redeploying the template resolved the issue.

"log": {
  "properties": {
    ...
    "datetime": {
      "type": "date",
      "format": "yyyy-MM-dd HH:mm:ss,SSS"
    }
  }
}
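
For completeness, the redeployed template looked roughly like this (the template name and index pattern are placeholders here, and the real template carries other settings I have left out):

PUT _template/mflow
{
  "index_patterns": ["mflow-*"],
  "mappings": {
    "properties": {
      "log": {
        "properties": {
          "datetime": {
            "type": "date",
            "format": "yyyy-MM-dd HH:mm:ss,SSS"
          }
        }
      }
    }
  }
}

Since a template is only applied when an index is created, the data in the existing index had to be reindexed before the new format took effect.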

Thanks for all comments.
