I'm trying to create a basic custom rule on a custom index that contains logs originating from Filebeat. Creating a basic rule on that index just isn't working: no signals are created.
The same query shows results as expected in Timeline... what am I missing here?
When you set up custom rules, by default they run every 5 minutes and look back 5 minutes. If nothing matches within the last 5 minutes, you are not going to see any signals.
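For reference, here is a rough sketch of how that schedule shows up on a query rule if you look at it through the detection engine API (the rule_id, name, index pattern, and query below are just placeholders, not your rule): interval is how often it runs, and from is how far back each run searches.

{
  "rule_id": "custom-filebeat-rule",
  "name": "My custom Filebeat rule",
  "description": "Placeholder rule to illustrate the schedule fields",
  "risk_score": 21,
  "severity": "low",
  "type": "query",
  "language": "kuery",
  "index": ["my-custom-filebeat-*"],
  "query": "event.module: *",
  "interval": "5m",
  "from": "now-5m",
  "enabled": true
}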
If you adjust Timeline to only look back 5 minutes and then test your queries, do you see what look like signals that should be created?
If you have access to your Kibana logs and kibana.yml, you can put your deployment into verbose logging mode like so:
logging.verbose: true
You will then see a lot of information in your logs about your signal rules running and whether your query is producing results.
Also, you can check the rule's "Failure History" to see whether there are any timeouts or anything else odd going on with your signal rule.
I set up the look-back as it should be and still nothing. I probably just need to dig into the Kibana logs...
If you adjust Timeline to only look back 5 minutes and then test your queries, do you see what look like signals that should be created? Yes, but the rule doesn't create any signals.
It happens only on one specific index; the other indices are all good.
Huh. What does a sample document from that index look like? Does it have a @timestamp in it? We have seen a few issues where, if an index does not have @timestamp, it won't create a signal.
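For example, a document the detection engine can work with looks roughly like this (a made-up Filebeat-style event, just to show the shape, with @timestamp at the top level):

{
  "@timestamp": "2020-02-25T10:15:30.000Z",
  "message": "Accepted password for admin from 10.0.0.5 port 22 ssh2",
  "host": { "name": "web-01" },
  "event": { "module": "system", "dataset": "system.auth" }
}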
The detection engine really tries to bubble up each and every error it encounters.
I am wondering whether your rule is simply not catching anything within the last 5 minutes each time it runs. You said you test-ran it within Timeline, but Timeline by default looks through all the indexes configured in your advanced settings, and it has defaults for those.
As a troubleshooting step, I would create a test space (or use your default space if it does not impact others using the SIEM) and change your advanced settings to use just that particular index, exactly as you have it defined in your custom signal rule. Test some queries within Timeline that are very broad, such as host: *, and then see if you can create a signal that operates with that broad query first; see the sketch below. Regardless of whether it works or not, shut it down once done, as you don't want broad wildcard queries running against your data sets.
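Concretely, that means pointing the SIEM index setting at just your index and starting from a wide-open query (the index pattern and query here are only examples). In Kibana > Management > Advanced Settings, under the SIEM section:

siem:defaultIndex: my-custom-filebeat-*

Then in Timeline (and in the test rule), a query as broad as:

host.name: *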
That at least will narrow things down a bit and help determine where the culprit might be. If for some reason you misnamed an index when creating a rule (for example), it will not show errors for that particular case. It will just see that nothing exists and tell you, yeah, there are no signals here.
I just did that, and in Timeline I get results, but the same query is not working as a detection rule; no signals... IMHO it feels like the detection feature is buggy :\
It is a beta feature and we are taking input from people to improve it. If you don't mind, could you give me your mapping, like so, from the Kibana dev tools?
GET test-2020.02.25/_mapping
I am wondering if you have something different in your mapping, with regard to how your @timestamp is defined, that we were not expecting when developing the feature, and we have a bug there.
With stock Auditbeat, Filebeat, and Winlogbeat, everything should work with little to no trouble, but there are a lot of permutations and combinations people are running, so I am hoping to fix any bugs that are uncovered quickly.
I sent you my email address so you can send me the mapping. I am now very curious what the mapping of your @timestamp is (among other things) now that I see the above error.
The "format": "strict_date_optional_time" causing problems in the Detection Rules. after removing it from the Index Template > Mappings > Advanced Options, and creating new index, all went well! @Frank_Hassanabad any thoughts?
There are no Kibana logs in Elastic Cloud, so it was hard... only when I set up the SIEM to work with just that specific index name did I get the error. @Frank_Hassanabad do you know why, or how I can get access to the Kibana logs in Elastic Cloud?
So I played around with different mappings and got it down to this small set of steps to replicate the UI errors you're seeing on the front end:
Using the detection engine, I ran it backwards with a very long look-back time and was able to generate signals from that particular mapping, with the timestamps formatted that way. However, on your system it might not have been able to find signals every 5 minutes looking backwards with that date-time mapping.
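By a very long look-back I mean something along these lines on the rule (the value is only an example), so each run searches far enough back to cover your older documents:

"interval": "5m",
"from": "now-30d"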
I am not sure yet whether the detection engine cannot find signals based on differently formatted timestamps.
I will try a few more permutations of custom date-time fields to see if any others cause issues on the front end or with the detection engine, but there is definitely a UI bug with date times and how we query Elasticsearch from within SIEM that we need to investigate further.