The detection engine really does try to bubble up any and every error it encounters.
I am wondering if your rule is simply not catching anything within the last 5 minutes each time it runs. You said you test ran it within Timeline, but by default Timeline searches all of the indexes configured in your advanced settings, and it has defaults for those.
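For reference, the index patterns Timeline (and the detection engine) read come from the `siem:defaultIndex` advanced setting. In a stock setup the defaults look roughly like the list below, but do check Kibana → Management → Advanced Settings on your own stack, since the exact list varies by version:

```
siem:defaultIndex:
  apm-*-transaction*, auditbeat-*, endgame-*, filebeat-*, packetbeat-*, winlogbeat-*
```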
As a troubleshooting step I would create a test space (or use your default space if it does not impact others using the SIEM) and change your advanced settings to use just that particular index, exactly as you have it defined in your custom signal. Test some queries within Timeline that are very broad, such as host: *, and then see if you can create a signal that operates with that broad query first. Regardless of whether it works or not, shut it down once done, as you don't want broad wildcard queries operating against your data sets.

That will at least narrow things down a bit and help determine where the culprit might be. If, for example, you misnamed an index when creating a rule, it will not show errors for that particular case. It will just see that nothing exists and tell you, in effect, that there are no signals here.
It is a beta feature and we are taking input from people to improve it. If you don't mind, could you send me your mapping from the Dev Tools of Kibana?
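To be concrete, here is a minimal sketch of what I mean by pulling the mapping from Dev Tools, where `your-custom-index-*` is a placeholder for the index pattern your rule actually uses:

```
GET your-custom-index-*/_mapping
```

Pasting the JSON response from that request here would be enough.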
I am wondering if you have something in your mapping, with regards to your @timestamp, that we were not expecting when developing the feature, and that we have a bug there.

With stock auditbeat, filebeat, and winlogbeat, everything should work with little to no trouble, but there are a lot of permutations and combinations people are running, so I am hopeful we can quickly fix any bugs that get uncovered.
The "format": "strict_date_optional_time" setting was causing the problems in the Detection Rules. After removing it from Index Template > Mappings > Advanced Options and creating a new index, all went well! @Frank_Hassanabad any thoughts?
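For anyone else hitting this, here is a rough sketch of the kind of @timestamp mapping that triggered the problem for me (the index name is hypothetical); the fix was simply dropping the "format" line so the field falls back to the default date handling:

```
PUT my-custom-index
{
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date",
        "format": "strict_date_optional_time"
      }
    }
  }
}
```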
There are no Kibana logs in Elastic Cloud, so it was hard to debug... only when I set up the SIEM to work with that one specific index name did I get the error. @Frank_Hassanabad, do you know why, or how I can get access to the Kibana logs in Elastic Cloud?
So I played around with different mappings and got it down to this set of small steps to replicate the UI errors you're seeing on the front end:
Using the detection engine, I ran it backwards with a very long look-back time and was able to generate signals from that particular mapping with the timestamps formatted that way. However, on your system it might not have been able to find signals every 5 minutes looking backwards with that date-time mapping.

I am not yet sure whether the detection engine can fail to find signals based on different timestamp formats.

I will try a few more permutations of custom date-time fields to see if any others cause issues on the front end or with the detection engine, but there is definitely a UI bug with date-times and how we query Elasticsearch from within SIEM that we need to investigate further.