Detection rule for password spraying attempts

Hi,

We have a use case where we want to detect when a single host attempts to log on as a certain number of different users. With threshold rules we can only detect a specific number of login attempts from a host; we cannot ensure that those attempts target different users, because a threshold rule can only aggregate on one field.
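For reference, this is roughly what our current rule looks like through the detection rules API (the index pattern, query, and values are just examples) -- the threshold only accepts a single field, so it counts attempts per host without telling us how many distinct users were targeted:

{
  "name": "Many failed logons from one host (example)",
  "description": "Threshold rule counting failed logons per host",
  "type": "threshold",
  "index": ["logs-*"],
  "query": "event.category:authentication and event.outcome:failure",
  "threshold": {
    "field": "host.name",
    "value": 25
  },
  "risk_score": 47,
  "severity": "medium",
  "interval": "5m",
  "from": "now-6m"
}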

I would appreciate any ideas on how to achieve this.

Hey there @heading -- thanks for joining the community! :slightly_smiling_face:

After chatting with some of the protections folks internally, I think your best bet for this sort of detection rule (until we add support for multi-field thresholds) would be to leverage a custom ML Job + Rule.

Something like the following should do the trick, but please be sure to double-check the fields to make sure they fit your configuration (results index, time_field/format, etc.).

If you're not familiar with our ML functionality, you can create the job via Machine Learning -> Anomaly Detection -> Create job -> Select index pattern -> Advanced -> Edit JSON and paste the following into the Job configuration JSON:


{
  "description": "description",
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "detector_description": "high_non_zero_count by \"user.name\" partitionfield=\"host.name\"",
        "function": "high_non_zero_count",
        "by_field_name": "user.name",
        "partition_field_name": "host.name",
        "detector_index": 0
      }
    ],
    "influencers": [
      "host.name"
    ]
  },
  "results_index_name": "password-spray-host-username",
  "data_description": {
    "time_field": "@timestamp",
    "time_format": "epoch_ms"
  },
  "groups": ["security"]
}
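The Advanced wizard will set up the datafeed for you from the index pattern you select. If you'd rather create the job through the API, or want to scope it to failed authentications only, a datafeed along these lines should work once the job exists (the datafeed/job IDs, index pattern, and ECS field values here are assumptions you'd adjust for your environment):

PUT _ml/datafeeds/datafeed-password-spray-host-username
{
  "job_id": "password-spray-host-username",
  "indices": ["logs-*"],
  "query": {
    "bool": {
      "filter": [
        { "term": { "event.category": "authentication" } },
        { "term": { "event.outcome": "failure" } }
      ]
    }
  }
}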

Ensuring you provide "groups": ["security"] will make the job available within the Security app, so you can then create a Detection Rule that generates alerts for any anomalies from this job above a specified anomaly score.
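For example, the rule would be the machine_learning type, pointed at the job you created -- here's a rough sketch of the rule body via the detection rules API (the name, job ID, and anomaly_threshold of 75 are placeholders; the same fields appear in the rule creation UI):

{
  "name": "Password spraying - anomalous number of users targeted per host",
  "description": "Alerts on anomalies from the custom password-spray ML job",
  "type": "machine_learning",
  "machine_learning_job_id": "password-spray-host-username",
  "anomaly_threshold": 75,
  "risk_score": 73,
  "severity": "high",
  "interval": "15m",
  "from": "now-30m",
  "enabled": true
}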


If you have a lot of services exposed to the open internet and a lot of failed auth, this will be a tad noisy, so you could add a numeric threshold via a custom rule on the detector, like the condition below, which would require a large delta in the event count before producing an anomaly. More on that here in the docs.

{
  "conditions": [
    {
      "applies_to": "actual",
      "operator": "lt",
      "value": 100
    }
  ]
}
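That condition lives under custom_rules on the detector, and "skip_result" simply suppresses the anomaly whenever the condition matches. If the job already exists, you can attach it after the fact with the update job API (the job ID below is just an example):

POST _ml/anomaly_detectors/password-spray-host-username/_update
{
  "detectors": [
    {
      "detector_index": 0,
      "custom_rules": [
        {
          "actions": ["skip_result"],
          "conditions": [
            { "applies_to": "actual", "operator": "lt", "value": 100 }
          ]
        }
      ]
    }
  ]
}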

Hopefully this is helpful and gets you moving in the right direction, but please do let us know if you have any questions. :slightly_smiling_face:

Cheers!
Garrett


Thanks @spong. I'll give it a try.
