As the title says, I'm looking to overwrite the ingest timestamp with the actual timestamp pulled from the log data. I've already got a custom regex that pulls this timestamp out of my logs as its own field, but I want to be able to sort logs by that timestamp in Kibana.
From my research, I feel I should just be able to replace "timestamp" with [@metadata][timestamp] to overwrite it, but it doesn't work. Instead, Kibana continues to display the timestamp at which the logs were ingested. The grok filter itself works fine: it correctly extracts the log's timestamp via the pattern_definitions field.
For example, my grok pattern will extract the following timestamp from the original logs:
[10/06/21 09:33:32.777.00]
Is it the square brackets that cause the date match to break down? Trying to find documentation specifically on date match and date target is leading me down many dead-end rabbit holes. Any help is appreciated.
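For what it's worth, here is a minimal sketch of the kind of filter that usually does this. It assumes the grok capture already excludes the square brackets and lands the value in a hypothetical field called `log_ts` (adjust the field name and the day/month order to match your data); `target => "@timestamp"` is what actually makes Kibana sort by the parsed value:

```
filter {
  date {
    # Assumes "log_ts" holds "10/06/21 09:33:32.777" without brackets.
    # If the brackets are still part of the field, they would need to be
    # stripped first (or included as literals in the pattern).
    match  => [ "log_ts", "dd/MM/yy HH:mm:ss.SSS" ]
    target => "@timestamp"
  }
}
```

If the date filter fails to parse, it tags the event with `_dateparsefailure` rather than erroring, which is an easy way to confirm in Kibana whether the match pattern is the problem.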
As far as I can tell, it's just an extreme microsecond value. The finest precision the date filter can work with is milliseconds (thousandths of a second). From the documentation, my understanding is that it should still match and simply append zeros.
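Since the trailing `.00` goes beyond what the date filter's millisecond precision can represent, one workaround is to strip it with a mutate gsub before the date filter runs. This is only a sketch under the assumption that the extracted field is named `log_ts` (hypothetical) and that the trailing fraction is always two digits:

```
filter {
  mutate {
    # Drop the trailing ".00" sub-millisecond fraction the date filter
    # cannot parse, e.g. "10/06/21 09:33:32.777.00" -> "10/06/21 09:33:32.777"
    gsub => [ "log_ts", "\.\d{2}$", "" ]
  }
  date {
    match  => [ "log_ts", "dd/MM/yy HH:mm:ss.SSS" ]
    target => "@timestamp"
  }
}
```

Since the extra digits carry less than a millisecond of information, dropping them loses nothing that Elasticsearch's default date mapping could store anyway.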
This is a terrible timestamp format to have to work with on an old service, but unfortunately I have no control over it. Ultimately, I just want to be able to sort data in Kibana by the timestamp associated with each log entry, not the ingest time at which the logs were parsed by Filebeat/Logstash, which is the default.