I'm having an issue where my syslog timestamps are being parsed with the wrong date format. In Elastic I see yyyy-11-Mo 11:37:17, and I get the following error in Logstash:
[2021-11-29T11:34:43,644][WARN ][logstash.outputs.Elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pci-syslog-2021.11.29", :routing=>nil, :_type=>"_doc"}, #LogStash::Event:0x1f92caa9], :response=>{"index"=>{"_index"=>"pci-syslog-2021.11.29", "_type"=>"_doc", "_id"=>"grSMbH0BNakrN0jJTJaw", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [timestamp] of type [date] in document with id 'grSMbH0BNakrN0jJTJaw'. Preview of field's value: 'Nov 29 11:34:11'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [Nov 29 11:34:11] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"date_time_parse_exception: Failed to parse with all enclosed parsers"}}}}}}
I think you didn't specify any format for the dateTime in the mapping, so by default Elasticsearch expects to get a date in the strict_date_optional_time or epoch_millis format.
According to you and the logs, the dateTime contains the value Nov 29, 2021 @ 11:37:17.627 in the Logstash output, and it does not match either of the two required formats.
So I think you have two possibilities.
The first is to edit the mapping in Elasticsearch to specify that the incoming dates are in the syslog date format, for example something like the sketch below.
The second is to use the date filter in Logstash to convert the syslog date to strict_date_optional_time directly in Logstash.
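If it helps, here is a rough sketch of the first option, applied through an index template so it takes effect from the next daily index onwards (the template name and the dateTime field name are just placeholders for whatever you actually use, and existing indices would need a reindex):

PUT _template/pci-syslog
{
  "index_patterns": ["pci-syslog-*"],
  "mappings": {
    "properties": {
      "dateTime": {
        "type": "date",
        "format": "MMM d HH:mm:ss||MMM dd HH:mm:ss||strict_date_optional_time||epoch_millis"
      }
    }
  }
}

I am not sure how Elasticsearch handles the missing year in the syslog format, though, so the second option may turn out to be safer.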
Everything @Cad said applies, but note that the issue is the field called [timestamp], so that is the mapping that you would need to change.
syslog timestamps have no year, so logstash uses heuristics to guess the year (if today is in January and the month in the timestamp is in December then assume it is from last year and so on). I do not know if elasticsearch date parsing does the same. If not, a date filter might be a better bet than a mapping update.
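Something like this is what I mean, as a minimal sketch (assuming the field really is called [timestamp] and that you want to overwrite it in place rather than leave the parsed value in @timestamp):

filter {
  date {
    # syslog style, with single- and double-digit day of month
    match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    target => "timestamp"
  }
}

The filter turns the value into a Logstash timestamp, which is sent to elasticsearch as ISO8601, so the default strict_date_optional_time||epoch_millis mapping should accept it.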
Thank you both for your answers. I tried using the date filter plugin as suggested, but I still get the same result. I'm definitely doing something wrong, but I can't figure out what exactly.
I also set the same format in my template, but I still get the same error.
[2021-12-02T14:50:49,458][WARN ][logstash.outputs.Elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pci-syslog-2021.12.02", :routing=>nil, :_type=>"_doc"}, #LogStash::Event:0x947f7c8], :response=>{"index"=>{"_index"=>"pci-syslog-2021.12.02", "_type"=>"_doc", "_id"=>"9fGyfH0BNakrN0jJ2UsE", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [timestamp] of type [date] in document with id '9fGyfH0BNakrN0jJ2UsE'. Preview of field's value: 'Dec 2 14:50:05'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [Dec 2 14:50:05] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
I did try that after a few attempts, but I had included the year. I just tried it the way you said and I'm still getting the same error. Could it be my template that's messed up?
Is it possible that I need to allow more than one format? I'm getting syslog from Linux VMs (mostly CentOS 7) and from our VMware appliances (Photon) coming from loginsight. We are using the loginsight codec plugin. Would that be the cause? I thought about doing separate indices, but this is how it was initially implemented.
Do you think it would be easier to create a new index for VMware only, or would allowing multiple formats (something like the sketch below) do the trick?
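Just to be clear, this is what I mean by allowing more than one format in a single date filter (only a sketch; I'm guessing the Photon / loginsight side sends something ISO8601-like):

date {
  # CentOS syslog formats plus whatever the VMware side sends
  match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
  target => "timestamp"
}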