Really need some help here. I am trying to use the Kibana SIEM app, in particular the Network section.
I have mapped all my fields to the ECS standard, but for some reason it returns a 'Data Fetch Failure' with the error "[failed to parse date field [1595751850086] with format [strict_date_optional_time]]".
The only field I currently have with an epoch time is called "time_of_log". (There are two time fields: @timestamp represents when the log was ingested and time_of_log represents when the log was made.) To convert the epoch time I used the Logstash date plugin so that the indices recognise its format. This works in the Discover section, so why won't Kibana SIEM?
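In case it helps, the date filter I'm using looks roughly like this (a rough sketch of my pipeline rather than the exact config; I went with UNIX_MS on the assumption that the value is epoch milliseconds):

filter {
  date {
    # time_of_log arrives as epoch milliseconds, e.g. 1595751850086
    match  => [ "time_of_log", "UNIX_MS" ]
    # write the parsed value back into time_of_log so it is indexed as a date
    target => "time_of_log"
  }
}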
I'm thinking that this is related to an outstanding issue we have had with date times here:
You can go through the links and discussion items from there for some workarounds, but this sounds like the same issue: we are using the default date time mapping from the index, when we should be doing what Discover does and push down the date time format we are using for the date ranges in our queries.
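One workaround that comes up in those discussions is to widen the date field's mapping so the index accepts epoch millis as well as the ISO format. A sketch only, with my-custom-index and time_of_log standing in for your index and field (for an existing index this generally means reindexing with the new mapping):

PUT my-custom-index
{
  "mappings": {
    "properties": {
      "time_of_log": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      }
    }
  }
}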
I had a very similar issue last week.
My UNIX timestamp was being parsed, but it either came through with a grokparsefailure or Kibana simply didn't show me the field as a date, only the literal UNIX timestamp string.
Oddly enough, the pipeline that works for me looks almost exactly like yours, except that I use "UNIX" instead of "UNIX_MS".
Have you already tried it that way?
date {
  match  => [ "applicationDate", "UNIX" ]
  target => "applicationDate"
}
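For what it's worth, my understanding of the difference is that "UNIX" parses epoch seconds while "UNIX_MS" parses epoch milliseconds, so which one you need depends on the precision of the value. A minimal sketch of the millisecond variant, with applicationDate again just being the field from my own pipeline:

date {
  # UNIX_MS expects epoch milliseconds, e.g. 1595751850086 (UNIX would expect 1595751850)
  match  => [ "applicationDate", "UNIX_MS" ]
  target => "applicationDate"
}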
Yep, it has been merged and we are very hopeful this is going to clear up all of these date time problems.
FWIW, the way to tell whether it will land in the next release is to check whether it was backported into the 7.x branch. When we get close to a release, a 7.9 branch is cut from 7.x, and 7.x then becomes the candidate for the release after that.
Here we can see that it has been back-ported to 7.x before the 7.9 branch was cut:
And then I can spot-check one of the lines of code within the 7.9 branch if I want to confirm:
And it does look like the feature is going to make it into 7.9.
That's just an epoch value being sent as part of the sort, but if the index being queried against is a non-Beats index whose date fields cannot accept epoch values, then that's the problem we have been seeing from users. The fix in the code base is to send down an explicit format with the date time ranges to tell ES which type of date time to use, rather than relying on the underlying index to have the expected date time format.
That way we can send the ranges in whichever format we want and not rely on the underlying index field's date format, which can differ across custom indices.
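In query DSL terms the change amounts to something like this (a rough illustration rather than the exact query the SIEM app builds; my-custom-index and time_of_log are stand-ins):

GET my-custom-index/_search
{
  "query": {
    "range": {
      "time_of_log": {
        "gte": 1595751850086,
        "lte": 1595838250086,
        "format": "epoch_millis"
      }
    }
  }
}

With the explicit "format" there, Elasticsearch parses those epoch values as epoch_millis regardless of how the field itself is mapped, so custom indices mapped with only strict_date_optional_time no longer reject the query.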