Detection Custom Rule not working

Hi,

I'm trying to create a basic custom rule on a custom index that contains logs that originated from Filebeat. Creating a basic rule on that index just isn't working; no Signals are created.

The same query shows results as expected in Timeline... what am I missing here? :frowning:

Hi Or_Biran and welcome to the forums!

When you set up custom rules, by default they will run every 5 minutes and look back 5 minutes. If nothing in your index matches within the last 5 minutes, then you are not going to see any signals.

If you adjust Timeline to only look back 5 minutes and then test your queries, do you see what looks like signals that should be created?
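One quick sanity check is a range query from the Kibana Dev Tools console (the index pattern below is just a placeholder for your custom index); if it returns no hits, the rule has nothing to pick up on its 5-minute runs:

GET your-custom-index-*/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-5m"
      }
    }
  }
}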

If you have access to your Kibana logs and kibana.yml, you can put your deployment into verbose logging mode like so:

logging.verbose: true

You will then see a lot of information in your logs about your signal rules running and whether your query is producing results.

Also, you can check your rule's "Failure History" to see if there are any timeouts or anything weird going on with your signal.

Thank you for the quick reply!!

I'm on Elastic Cloud; how can I view these logs?

I set up the look-back as it should be and got nothing. I probably just need to look at the Kibana logs...

If you adjust Timeline to only look back 5 minutes and then test your queries, do you see what looks like signals that should be created?
Yes, but the rule doesn't create any Signals.

Update:

It happens only on a specific index; other indices are all good.

Any ideas?

It happens only on a specific index; other indices are all good.

Huh. What does a sample document from the index look like? Does it have a @timestamp in it? We have seen a few issues where, if an index does not have @timestamp, it wouldn't create a signal.

  "_index": "test-2020.02.25",
  "_type": "_doc",
  "_id": "ys8QfnABCw2bzkRsudyU",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2020-02-25T20:36:28.848Z",
    "timestamp": "2020-02-25T20:36:28.848Z",
    "@version": "1",
    "input": {
      "type": "docker"
    },
    "ecs": {
      "version": "1.4.0"
    }
  },
  "fields": {
    "@timestamp": [
      "2020-02-25T20:36:28.848Z"
    ],
    "timestamp": [
      "2020-02-25T20:36:28.848Z"
    ]
  },
  "sort": [
    1582662988848
  ]
}

A sample of the JSON (I removed custom fields).

You can see two timestamp fields in the fields object; could that be a problem?

No, it shouldn't be. Do you see any errors in the error tab of your custom rule?

In this section?

Nothing... I'm "blind" currently, without Kibana logs... Elastic Cloud.

Not seeing errors is a good thing... :wink:

The detection engine really tries to bubble up each and every error it encounters.

I am wondering if your rule is not catching anything within the last 5 minutes each time it runs. You said you test-ran it within Timeline, but Timeline by default looks through all the indexes that are configured in your advanced settings, and it has defaults for those:

Kibana > Management > Advanced Settings > siem:defaultIndex

As a troubleshooting step, I would create a test space (or use your default space if it does not impact others using the SIEM) and change your advanced settings to use just that particular index, exactly as you have it defined in your custom signal. Test some queries within Timeline that are very broad, such as host: *, and then see if you can create a signal that operates with that broad query first. Regardless of whether it works or not, shut it down once done, as you don't want broad wildcard queries operating against your data sets.
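As a rough sketch, that broad host: * test is roughly equivalent to an exists query from Dev Tools (the index name here is taken from your earlier sample):

GET test-2020.02.25/_search
{
  "query": {
    "exists": {
      "field": "host"
    }
  }
}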

That will at least narrow it down a bit and help determine where the culprit might be. If, for example, you misnamed an index when creating a rule, it will not show errors for that particular case. It will just see that nothing exists and tell you there are no signals here.

Thanks again for trying to help!

I just did that: in Timeline I get the results, but the same query is not working as a detection rule, no Signals... IMHO it feels like the detection feature is buggy :\

It is a beta feature and we are taking input from people to improve it. If you don't mind, could you give me your mapping, like so, from the Dev Tools of Kibana?

GET test-2020.02.25/_mapping

I am wondering if you have something in your @timestamp mapping that we were not expecting when developing the feature, and we have a bug there.
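If the full mapping is too large to paste, the field mapping API can pull back just the @timestamp definition:

GET test-2020.02.25/_mapping/field/@timestamp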

With stock Auditbeat, Filebeat, and Winlogbeat, everything should work with little to no trouble, but there are a lot of permutations and combinations people are trying, so I am hoping to fix any uncovered bugs quickly.

Can I send you the mapping by email? I can't post it in a private message, as the mapping is too large to send...

After more and more debugging and trying to find what's wrong, I got this error.

I converted the epoch time and tried to find the relevant log, but there is no log at that time... lost :expressionless:

I sent you my email address so you can send me the mapping. I am now very curious what the mapping of your @timestamp is (among other things), now that I see the above error.

Sent :slight_smile:

Found the solution.

It was the Index Template that caused the problems...

The index mapping was:

"@timestamp": {
          "type": "date",
          "format": "strict_date_optional_time"
}

The "format": "strict_date_optional_time" causing problems in the Detection Rules. after removing it from the Index Template > Mappings > Advanced Options, and creating new index, all went well! @Frank_Hassanabad any thoughts?

There are no Kibana logs in Elastic Cloud, so it was hard... only when I set up the SIEM to work with just that specific index name did I get the error. @Frank_Hassanabad do you know why, or how I can get access to the Kibana logs in Elastic Cloud?

Thanks again for all the help!

Ah yea! Awesome.

I am going to play around with this configuration and other time-based ones in a sandbox environment and see if we have some bugs there.

I do not know for sure about getting the Kibana logs from cloud at the moment. I asked around and some people are going to get back to me about it.

So I played around with different mappings and got it down to a small set of steps to replicate the UI errors you're seeing on the front end.

Using the detection engine, I ran it with a very long look-back time and was able to generate signals from that particular mapping with the timestamps formatted that way. However, on your system it might not have been able to find signals looking back every 5 minutes with that date-time mapping.

I am not sure yet whether the detection engine cannot find signals based on different timestamp formats.

I will try a few more permutations of custom date-time fields to see if any others cause issues on the front end or with the detection engine, but there is definitely a UI bug with date-times and how we query Elasticsearch from within SIEM that we need to investigate more.

So I really appreciate you finding it.

