Match rule not working

Hello,

I am trying to add threat intelligence to my SIEM. I downloaded a database of malware hashes, and I am using the Auditbeat file integrity module to detect malware. I then created an indicator match rule like this:

Then I downloaded a malware sample on my Windows machine, and as you can see, when I filter by its hash the event shows up in the Auditbeat data in Discover:
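(For anyone reproducing this: a term query in Dev Tools is a quick way to confirm the event was indexed. This is only a sketch; the index pattern and the ECS field name `file.hash.sha1` are assumptions based on a default Auditbeat file integrity setup, and the hash value below is a placeholder, not the real one from the screenshot.)

```
GET auditbeat-*/_search
{
  "query": {
    "term": {
      "file.hash.sha1": "0000000000000000000000000000000000000000"
    }
  }
}
```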

We can also see it in my malware database:

But when the rule executes, it doesn't generate any alerts, even though we can see that the rule run succeeded.

Could you please explain why this is happening?

Best regards

Hi there @TheHunter1,

Great to see you trying out the indicator match rule type!

Your rule config looks good to me.
Here are a couple of things to check:

  1. Is your threat indicator field sha1_hash a keyword?
  2. What is the schedule for this rule (i.e. "Runs every" and "Additional look-back time")?
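A quick way to check point 1 is to inspect the field mapping in Dev Tools (assuming your threat indicator index matches bazaar-*). If the field comes back as type "text" instead of "keyword", exact-match lookups against the full hash will not behave as expected:

```
GET bazaar-*/_mapping/field/sha1_hash
```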

Also, you mentioned that you can see that the rule succeeded (that's good!);
could you send along a screenshot of the "Rule Monitoring" tab, like this?

Out of curiosity, how many documents do you have in your bazaar-* index pattern?
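(A one-liner in Dev Tools gives that number directly:)

```
GET bazaar-*/_count
```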

Thanks!

Thanks for your answers, @Mike_Paquette.

To answer your questions:

1- Yes, the threat indicator field sha1_hash is a keyword, as you can see in this screenshot:

2- The rule schedule is "Runs every: 5 min" with "Additional look-back time: 5 min" (it was 1 min at first, but since no alerts were generated I increased it to 5 min to make sure I have no data loss; still no alerts).

3- Here is a screenshot of the Detection Rules monitoring section (the last rule in the screenshot):

4- In the bazaar-* index pattern, I have 274,247 documents.

I just ran a quick test using an index that looks similar, and I was able to get matches. I did the following in Dev Tools, choosing one of my hashes:

PUT bazaar-00001
PUT bazaar-00001/_mapping
{
  "properties": {
    "@timestamp": {
      "type": "date"
    },
    "sha1_hash": {
      "type": "keyword"
    }
  }
}

PUT bazaar-00001/_doc/1
{
  "@timestamp": "2021-03-09T16:43:21.757Z",
  "sha1_hash": "ad2aaef284522469b03fb9c019be71e0fed70bec"
}

And then used this definition:

And I was getting hits. What do all the @timestamp values in your list look like? Are they unique or identical? We currently sort the list by timestamp and then pull it in sections using a search_after once the list exceeds 9k documents. If all the @timestamp values in the list are identical, that would be a problem.
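To illustrate the paging behaviour described above, this is roughly what a search_after page over the list looks like (a sketch, not the exact internal query; the search_after value is the epoch-millis form of the example @timestamp above, as returned in the sort values of the previous page's last hit). If many documents share the exact same sort value, the page boundary is ambiguous and documents can be skipped:

```
GET bazaar-*/_search
{
  "size": 9000,
  "sort": [ { "@timestamp": "asc" } ],
  "search_after": [ 1615308201757 ]
}
```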

The query time for malware_hash_bazaar looks almost too quick for a list of that size.

Thanks for your answer, @Frank_Hassanabad.

I just checked the @timestamp values in my bazaar-* index pattern, and they look almost identical (some lines differ only in the milliseconds, e.g. Mar 9, 2021 @ 17:34:41.**006**):

How can I solve this problem in that case? Should I replace the timestamp with the first_seen field from the bazaar database?

Thanks for your help

Anything that makes each @timestamp unique would be good. If @timestamp is exactly the same across records, that's when you can get a miss once the list grows beyond 9k documents.

If first_seen is unique, that should work out. We loop over the entire list each time if your query is *:*.
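(One hedged sketch of how you could copy first_seen into @timestamp across the existing documents, using the _update_by_query API; this assumes first_seen is already mapped as a date on the same documents, and you should test it on a copy of the index first:)

```
POST bazaar-*/_update_by_query?conflicts=proceed
{
  "script": {
    "lang": "painless",
    "source": "ctx._source['@timestamp'] = ctx._source.first_seen"
  },
  "query": {
    "match_all": {}
  }
}
```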

We are discussing not making @timestamp mandatory for lists in the future, as some people have complained about its mandatory nature and gotchas like this one. We might use something other than search_after, such as a live cursor, but if the cursor times out between list pages due to slow querying, we would just end up replacing one bug with another.

But I'm hopeful we can remove more gotchas here soon.


Thanks for your answers @Frank_Hassanabad,

I will try to change the @timestamp and keep you updated :blush:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.