If the date filter fails to parse a timestamp, it'll log a message with additional details. Unless the timestamp comes from a jdbc input; in that case you might be running into a different problem.
As you can see, everything is getting parsed correctly, and yet I am still seeing _dateparsefailure. Also, the field type is being reported as text and not date.
One more strange thing: @timestamp in the above JSON shows the date as 24, whereas Kibana normally shows June 25th 2018, 01:17:19.913.
I find that hard to believe. But I see that your subdata_proc_date ends with a space. The date filter is pretty picky, so you'll either have to include that space in the date pattern or remove the space before the date filter runs.
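A minimal sketch of the second option (the field name is taken from your event; the ISO8601 pattern and the target are assumptions about your config, so adjust to match your data):

```
filter {
  mutate {
    # Remove leading/trailing whitespace so the date filter sees a clean value
    strip => ["subdata_proc_date"]
  }
  date {
    # ISO8601 is an assumption; use whatever pattern matches your data
    match  => ["subdata_proc_date", "ISO8601"]
    target => "subdata_proc_date"
  }
}
```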
@magnusbaeck I am curious why the Grok debugger is not showing that space. I just tested the existing grok pattern and that field does not contain a space.
@magnusbaeck I am not sure how, but the space disappeared. Yet despite using the date filter, the field is still getting reported as text.
I have been facing this issue for 3 days. None of my date filters result in the field being reported correctly; all of them come back as text. And yes, the date filter is exactly the same as the one I posted at the beginning with the question.
This is what I am seeing in the JSON: "subdata_proc_date": "2018-06-08T13:54:36.342Z". But the type is still text.
The mapping of a field in an existing ES index can never change; you need to reindex into a new index to get a new mapping. Once that's done (you can use ES's get mapping API to verify that the mapping matches your expectations) you probably need to refresh the field list in Kibana.
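For example, the field-level mapping API shows the current type of a single field (the index name here is an example; substitute one of yours):

```
GET /filebeat-6.2.4-2018.06.25/_mapping/field/subdata_proc_date
```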
@magnusbaeck Agreed. But what about the fact that a new index is created on a daily basis? In other words, my indices follow the default Filebeat naming format.
Does that mean I'd have to reindex at least once a day in order to turn the text field into a date?
Because that is what I've been doing for the last 3 days:

1. Every day a new index is generated in the format filebeat-version-yyyy-mm-dd.
2. Every day I create a new index, specifying the date fields and their format, reindex the old index's data into it, and delete the old one (roughly as sketched below).
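The reindex step looks roughly like this (a sketch; both index names are examples):

```
POST /_reindex
{
  "source": { "index": "filebeat-6.2.4-2018.06.25" },
  "dest":   { "index": "filebeat-6.2.4-2018.06.25-fixed" }
}
```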
I am not sure this is the best way to do it. Is there a better approach you can suggest?
A default Elasticsearch setup will map a string containing "2018-06-08T13:54:36.342Z" as a date. If that doesn't happen for you I'm not sure what's up. Maybe an index template that disables dynamic mappings or something?
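You can inspect the installed templates to check; a sketch, assuming the templates follow the usual Filebeat naming:

```
GET /_template/filebeat*
```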
To rule out some factors, try using ES's REST API to create a new index (e.g. filebeat-whatever-2018-06-30) containing a single document:
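Something like this (a sketch: the doc type name and the request body are illustrative; the field value is taken from above):

```
PUT /filebeat-whatever-2018-06-30/doc/1
{
  "subdata_proc_date": "2018-06-08T13:54:36.342Z"
}
```

Then fetch that index's mapping and see whether subdata_proc_date comes back as date.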
Also, I've kept the ES and Logstash configuration pretty much at the defaults.
Looks like I've found something in the mapping: "date_detection": false. Would the default date_detection mechanism detect this timestamp, or would I need to specify the format separately?
I am not sure if this is due to new changes in ES 6.x or if it is the intended behavior, but I am not able to change date_detection to true. However, setting the types for my date fields in a custom template and applying it to the filebeat-* indices worked.
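For anyone landing here later, a minimal sketch of that kind of template (the template name and the doc mapping type are assumptions; only subdata_proc_date comes from this thread):

```
PUT /_template/filebeat-date-fields
{
  "index_patterns": ["filebeat-*"],
  "order": 1,
  "mappings": {
    "doc": {
      "properties": {
        "subdata_proc_date": { "type": "date" }
      }
    }
  }
}
```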