Logstash Date Parsing Error - How to Address

Hello,

I had a situation where some of our indices were not being discovered in Kibana. The problem turned out to be that the grok parser was using %{TIMESTAMP_ISO8601:logDateTime} for the log timestamp field, while Kibana required %{TIMESTAMP_ISO8601:logTimestamp}. Once I changed this, some of the additional indices were discovered.
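For context, the relevant part of my grok filter now looks roughly like the sketch below; the surrounding pattern and the logMessage field are simplified placeholders, the only real change is the field name:

```
filter {
  grok {
    # "logTimestamp" is the field name Kibana expected; it used to be "logDateTime"
    match => { "message" => "%{TIMESTAMP_ISO8601:logTimestamp} %{GREEDYDATA:logMessage}" }
  }
}
```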

However, another type was not, because I was getting a _dateparsefailure in Logstash on the corresponding messages coming in from Filebeat.

I examined what was different between the working timestamp and the failing timestamp:

A working timestamp: 2019-05-02 22:08:30.022
A failing timestamp: 2019-05-02 22:08:30

As a test, I made the corresponding input file (the input file to Filebeat) use the identical format as the ones that were working. My problem went away.
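(As an aside, I assume listing both formats in the date filter would also have handled this, so it can parse timestamps with or without milliseconds; a sketch, using the field name from my config:)

```
filter {
  date {
    # try the millisecond format first, then fall back to the one without;
    # the first pattern that matches is used
    match => [ "logTimestamp", "yyyy-MM-dd HH:mm:ss.SSS", "yyyy-MM-dd HH:mm:ss" ]
  }
}
```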

Here is my question: why does this difference (logDateTime versus logTimestamp) even matter? Isn't this part of the grok pattern just a label for the field?

Thanks for any insights!

logTimestamp vs. logDateTime is indeed just a label. But Elasticsearch has expectations about what the format of a field is. If the mapping is configured to expect milliseconds on logTimestamp, then it may refuse to index a document that does not have milliseconds. This will show up in the Logstash log as a mapping exception.
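For illustration, a date mapping that accepts both variants could look something like this (index and field names are just placeholders, assuming formats like the ones above):

```
PUT my-index
{
  "mappings": {
    "properties": {
      "logTimestamp": {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss"
      }
    }
  }
}
```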
