Help with match date 2019-12-04 06:34:33.850000+00:00

I have date values like this 2019-12-04 06:34:33.850000+00:00

I have trouble describing the format for "match". So far the closest I have come up with is "yyyy-mm-dd hh:mm:ss.sss"

Should I use:
timezone => "GMT"
locale => "en-US"

Thanks in advance

I think "yyyy-MM-dd HH:mm:ss.SSSSSSZ" would work. No need to specify a timezone since it is included in the date.

Badger, thanks a lot for the quick response. It works for almost 90% of the cases, but, for example, '2019-10-19 18:59:17.212000+00:00' throws an "illegal_argument_exception":

failed to parse date field with format [strict_date_optional_time || epoch_millis]

Do you have any ideas?

logstash throws an exception? Or logstash gets a 400 from elasticsearch and elasticsearch throws the exception?

Badger, thanks a lot for your guidance. Here is the exception:

[2019-12-09T15:32:17,019][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"slap_msgs", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x3134d74d], :response=>{"index"=>{"_index"=>"slap_msgs", "_type"=>"_doc", "_id"=>"5epc7G4BD6qqC3DZ0bRR", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [msg_time] of type [date] in document with id '5epc7G4BD6qqC3DZ0bRR'. Preview of field's value: '2019-10-17 21:04:39.244000+00:00'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [2019-10-17 21:04:39.244000+00:00] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"date_time_parse_exception: Failed to parse with all enclosed parsers"}}}}}}

elasticsearch expects the [log_time] field to be a date. It is arriving as a string, and the default parser only understands strict_date_optional_time and epoch_millis. I assume the field that contains 2019-10-19 18:59:17.212000+00:00 is [log_time]. You are parsing that using a date filter. If you are using the default target then that sets @timestamp but does not modify [log_time], in which case I would expect that error all of the time.

If you are using target => "log_time" to modify [log_time] then it is going to get a mapping exception when the date parser fails.

There are several approaches you could take. You could get rid of the date filter and define a custom parser in elasticsearch that can parse that text format.

Personally I would probably check for a _dateparsefailure and then use mutate+rename to rename log_time to failed_log_time.
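
As a rough sketch of that second approach, assuming the date filter keeps its default tag_on_failure of "_dateparsefailure", a conditional placed after it could look like this:

date {
  match => ["log_time", "yyyy-MM-dd HH:mm:ss.SSSSSSZ"]
  target => "log_time"
}
if "_dateparsefailure" in [tags] {
  # parsing failed, so keep the raw string out of the date-mapped field
  mutate { rename => { "log_time" => "failed_log_time" } }
}

That way the rows that fail to parse still get indexed, just with the raw string in a field that is not mapped as a date.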

Badger, I will follow your directions, thanks again for getting back to us.

One final question:

Here is the .conf file; we are reading a CSV file.
---- Start
input {
  file {
    path => <my_csv_file>
    sincedb_path => <my_file>
    start_position => "beginning"
  }
}

filter {
  csv {
    separator => ","
    columns => ["index","slap_id","msg_time","diag_gw","diag_ingesttime","diag_conductortime","diag_uuid","diag_rssi","diag_snr","diag_sf","diag_channel","site_id"]
    remove_field => ["host","path","message"]
  }

  date {
    match => ["msg_time", "yyyy-mm-dd hh:mm:ss.SSSSSSZ"]
    target => "msg_time"
  }

  # type convert number fields

mutate {convert => ["index", "integer"]}
mutate {convert => ["diag_rssi", "integer"]}
mutate {convert => ["diag_snr", "integer"]}
mutate {convert => ["diag_sf", "integer"]}
mutate {convert => ["diag_channel", "integer"]}
}

output {
  elasticsearch {
    hosts => <my_url_host>
    user => <my_elastic_user_id>
    password => <my_elastic_password>
    index => <my_index_name>
  }
  stdout {}
}
---- End

We are ingesting about 21,000,000 rows and the process fails for a few thousand of them. It would be great to understand why it fails for only a few thousand rows. If I understand you correctly it should fail for all the rows, or did I get it wrong?

Case matters. hh is the hour of the half day, so only valid from 1 to 12. mm is minute, not month.
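
So in the config above, the match for [msg_time] would need the corrected case, something like the following (keeping your target, and assuming the single Z keeps handling the +00:00 offset as it did for the rows that already parsed):

date {
  # MM is month, HH is hour of day (0-23)
  match => ["msg_time", "yyyy-MM-dd HH:mm:ss.SSSSSSZ"]
  target => "msg_time"
}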
