Threshold Rule for Detecting Brute Force Attacks

Hi, with the release of 7.9 I tried creating a threshold rule for detecting and alerting on Windows brute force attacks. The query is event.code: 4625, the threshold field is host.name, and the threshold is greater than or equal to 5.
But the rule isn't detecting anything.


Thanks for reaching out, @Ameer_Mukadam. I have a few questions to help identify the problem.

Are you using Winlogbeat to ship Windows security event logs to Elasticsearch? If so, what version of Winlogbeat are you using?

If you execute the following query in Discover under the winlogbeat-* index pattern, do you see 5 or more events within the time window you specified in the threshold rule? This checks that the events needed to trigger your rule are actually present.

event.code:4625
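
If you'd rather check from Dev Tools than Discover, a quick count query along these lines should show the same thing. This is just a sketch: it assumes the winlogbeat-* indices and a 5-minute window to roughly match the rule interval, so adjust both to your setup.

# Count 4625 (failed logon) events seen in the last 5 minutes
GET winlogbeat-*/_count
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "event.code": "4625" } },
        { "range": { "@timestamp": { "gte": "now-5m" } } }
      ]
    }
  }
}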

Does your rule look similar to my example in the screenshot below?

How have you configured the schedule for your rule? In my example below, I've set the interval to 5 minutes and the additional look-back time to 1 minute.

I'll keep an eye out for your response. Thanks

Yes, my rule looks exactly the same. I tuned the schedule and it started working. Thank you very much.

Great! I'm glad you got it working 🙂

Hello Ameer,

how did you tune the schedule? I have exactly the same problem and can't find any error.
I also use version 7.9.0.

Thanks for your help.
Kind regards,
siiman

Hey, I didn't do anything specific; it suddenly just started working.

Just throwing this out there, @siiman and @Ameer_Mukadam: here is at least one reason why tweaking the scheduling can make things start working when they previously looked like they didn't. This may or may not be the explanation in your case, but it's good to know for the future.

When you create a rule, set its schedule, and run it, the rule by default queries against the @timestamp field on each scheduled run. If your agents (Beats, Endgame, etc.) are sending events whose @timestamp is off because the sending computer has clock skew, you will start to see this kind of behavior, where events are sometimes missed between rule runs.

The additional look-back time is there to account for some clock skew: it intentionally creates a small overlap between rule runs, with signal de-duplication across the overlaps. We have seen people change this value and then start to see signals where before they might not have.
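
As a rough sketch of what that overlap means, using the 5-minute interval and 1-minute look-back from my example above (the exact query the detection engine builds internally is my assumption here): a run at 12:05 covers roughly 11:59 to 12:05, the next run at 12:10 covers roughly 12:04 to 12:10, and anything landing in the 12:04 to 12:05 overlap is seen by both runs and de-duplicated. Conceptually, each run is doing something like:

# Each run effectively searches interval + additional look-back:
#   run at 12:05 -> @timestamp from ~11:59 to 12:05
#   run at 12:10 -> @timestamp from ~12:04 to 12:10
GET winlogbeat-*/_count
{
  "query": {
    "range": { "@timestamp": { "gte": "now-6m", "lte": "now" } }
  }
}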

The worst offenders for clock skew are systems that run log collectors or log proxies: they receive a batch of logs whose timestamps are already set to a time in the past, then forward them to Elasticsearch in bursts or at periodic intervals, which deepens the clock skew problem.

If this is the case, one feature you can use is the rule's timestamp override: choose a field that you set up server side with an ingest pipeline processor [1] to record the date and time each event arrives in Elasticsearch, such as event.ingested.
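
For example, here is a minimal sketch of such a pipeline, using a set processor to stamp event.ingested with the ingest time. The pipeline name is just an illustration, and you would still need to attach it to your indices, e.g. via the index.default_pipeline index setting or your Beat's output configuration.

PUT _ingest/pipeline/add_event_ingested
{
  "description": "Record the time each document arrives in Elasticsearch",
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}

With that field populated, you can set the rule's timestamp override to event.ingested so detection queries run against ingest time instead of a possibly skewed @timestamp.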

Refs:

  1. https://www.elastic.co/guide/en/elasticsearch/reference/master/ingest-processors.html