Hi,
I have been using Logstash for a while, and after upgrading to version 8 I noticed that the @timestamp field format changed from millisecond precision to microsecond precision (instead of 2022-07-28T09:46:06.200Z we now get 2022-07-28T09:46:06.200000Z).
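The difference is only the number of fractional-second digits; a throwaway Python helper (purely illustrative, not Logstash code) that truncates the new format back to milliseconds looks like this:

```python
def micros_to_millis(ts: str) -> str:
    """Truncate a 6-digit fractional second to 3 digits.

    "2022-07-28T09:46:06.200000Z" -> "2022-07-28T09:46:06.200Z"
    """
    base, frac = ts.rstrip("Z").split(".")
    return f"{base}.{frac[:3]}Z"

print(micros_to_millis("2022-07-28T09:46:06.200000Z"))
```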
I have many indices that map the @timestamp field with a millisecond format, and now I'm getting many indexing errors on that field.
I'm able to work around this (and restore millisecond precision) by overriding the field; without the override, indexing fails with errors like:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "failed to parse field [@timestamp] of type [date] in document with id 'XXX'. Preview of field's value: '2022-07-28T08:09:38.000000Z'"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "failed to parse field [@timestamp] of type [date] in document with id 'XXX'. Preview of field's value: '2022-07-28T08:09:38.000000Z'",
    "caused_by" : {
      "type" : "illegal_argument_exception",
      "reason" : "failed to parse date field [2022-07-28T08:09:38.000000Z] with format [yyyy-MM-dd'T'HH:mm:ss.SSSX]",
      "caused_by" : {
        "type" : "date_time_parse_exception",
        "reason" : "date_time_parse_exception: Text '2022-07-28T08:09:38.000000Z' could not be parsed at index 23"
      }
    }
  },
  "status" : 400
}
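For new indices, one way to tolerate both precisions is to list multiple date formats in the mapping, separated by `||`; a sketch (the index name and exact format strings are my assumption, not taken from the failing index):

```json
PUT my-index
{
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date",
        "format": "yyyy-MM-dd'T'HH:mm:ss.SSSX||yyyy-MM-dd'T'HH:mm:ss.SSSSSSX"
      }
    }
  }
}
```

Alternatively, the built-in strict_date_optional_time_nanos format accepts timestamps with up to nine fractional digits.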
I guess we could originally have used a plain date field and let Elasticsearch infer the format, but this mapping is already applied to many indices, and an existing mapping can't be changed without reindexing.
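Since the format of an existing date field can't be edited in place, the usual route is to create a new index with the desired mapping and copy the data over with the _reindex API; a sketch with hypothetical index names:

```json
POST _reindex
{
  "source": { "index": "logs-old" },
  "dest": { "index": "logs-new" }
}
```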