Kibana SIEM Function: Failed to Parse Date field? (Epoch Time)

Hi All,

Really need some help here. I am trying to use the Kibana SIEM app; in particular, the Network section.

I have mapped all my fields to the ECS standard, but for some reason it shows a 'Data Fetch Failure' with the message "[failed to parse date field [1595751850086] with format [strict_date_optional_time]]".

The only field I currently have with an epoch time is called "time_of_log". (There are two time fields: @timestamp represents when the log was ingested and time_of_log represents when the log was made.) To convert the epoch time I used the Logstash date plugin so that the index recognises its format. This works in the Discover section, but Kibana SIEM will not. Why?

This is a snippet of my Logstash pipeline.
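The key part is the date filter, which in essence does this (a simplified sketch rather than my exact config; writing the parsed value back into time_of_log is an assumption):

    date {
      # time_of_log holds epoch milliseconds, so UNIX_MS is used
      match  => [ "time_of_log", "UNIX_MS" ]
      # write the parsed date back to the same field
      target => "time_of_log"
    }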

This is what the Kibana SIEM app shows:

As you can tell, the map itself is rendering, but it won't parse any of the data.

I'm thinking that this is related to an outstanding issue we have had with date times here:

You can go through the links and discussion items from there for some workarounds, but this sounds like the same issue: we are using the default date time mapping from the index, when we should be doing what Discover does and pushing down the date time format we are using for date ranges in our queries.

Cheers @Frank_Hassanabad. I presume, and hope, that this issue will therefore be fixed in the next rollout, version 7.9?

I had a very similar issue last week.
My UNIX timestamp was parsed, but it either came through with a _grokparsefailure tag or Kibana simply didn't show me the field as a date, only as the literal UNIX timestamp string.

Oddly enough, the pipeline that works for me looks almost exactly like yours, except that I use "UNIX" instead of "UNIX_MS".
Have you already tried it that way?

date { match => [ "applicationDate", "UNIX"] target => ["applicationDate"] }

Yep, it has been merged and we are very hopeful this is going to clear up all of these date time problems.

FWIW, the way to tell whether it will land in the next release is to check whether it was backported into the 7.x branch. When we get near a release, 7.x is branched into a 7.9 branch, and 7.x then becomes the candidate for the release after that.

Here we can see that it has been backported to 7.x before the 7.9 branch was cut:

And then, if I want to verify, I can double-check one of the lines of code within the 7.9 branch:

and it does look like the feature is going to make it into 7.9 🙂

@Frank_Hassanabad Thank you for doing the in-depth check. I really hope it does fix it; I'm hoping to use the anomaly detection!

Did that fix it for you then? My epoch time is in milliseconds (UNIX_MS), so wouldn't the date plugin reject it if I set it to UNIX?

Hi @Frank_Hassanabad, I realised that when looking at the JSON of each log, there is a field at the bottom called "sort":

"sort": [
    1595963620550
  ]

I think this is the issue, but I presume it cannot be removed because Elasticsearch uses it for indexing?

It shouldn't be, not that I know of.

That's just an epoch value being sent as part of the sort. But if the index being queried against is a non-Beats index and cannot handle an epoch value, then that's the problem we have been seeing from users. The fix in the code base is to send down an explicit format with the date time ranges, to tell ES which kind of date time to use rather than relying on the underlying index having an expected date time format.

Basically, we changed the date ranges to utilize the format parameter described here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-range-query.html#range-query-field-params

so that we can send the ranges in whichever format we want, and not rely on the underlying index field's date format, which can differ across custom indexes.
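For illustration, a range query with an explicit format looks something like this (a sketch only; the index name, field name, and format string are examples, not the literal query the SIEM app sends):

    GET /my-custom-index/_search
    {
      "query": {
        "range": {
          "@timestamp": {
            "gte": "2020-07-26T00:00:00.000Z",
            "lte": "1595963620550",
            "format": "strict_date_optional_time||epoch_millis"
          }
        }
      }
    }

With the format stated explicitly, Elasticsearch can parse both ISO dates and epoch milliseconds in the range, regardless of how the date field happens to be mapped in a custom index.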
