Excessive "External Alerts" after update to 7.8

Hello,

We updated to 7.8.1 last week, and I just noticed that we now have a huge number of external alerts from panw.panos.

I've been working on updating event.kind to alert for mcafee and cylance logs, but those are effectively invisible right now because of the panw volume. So how are we supposed to use external alerts? (Our Palo Alto produces so many 'alerts' that are completely ignorable.)

Are the External Alerts meant to be left unfiltered? Or are we supposed to use Logstash to, for example, remove the alert value when event.severity is 4 or lower? Or is there something I'm missing? Where is Elastic going with these External Alerts?
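
To illustrate what I mean, here is a rough, untested Logstash sketch. It assumes event.severity is indexed as a number, and the threshold of 4 is just an example:

```
filter {
  # Untested sketch: demote ignorable low-severity Palo Alto 'alerts'
  # so they no longer count as External Alerts. The threshold (4) and
  # the numeric event.severity type are assumptions about our own data.
  if [event][module] == "panw" and [event][kind] == "alert" and [event][severity] <= 4 {
    mutate {
      replace => { "[event][kind]" => "event" }
    }
  }
}
```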

Sorry for the many questions; I'm trying to understand where this feature is going so I don't spend time on it the wrong way. Preferably, the total number of External Alerts would be lower and show only the more critical events, without having to configure a filter every time. Just thinking out loud, but a way to pin filters locally might help. Currently pinning filters is global (I think), which makes it more difficult to use, as SIEM / External Alerts filters are not applicable everywhere.

Also, there are currently only event.category and event.module to aggregate on in the External Alerts graph. Are there any plans to add more fields, such as host.name, source.ip, destination.ip, event.dataset, and others?

Tx and grtz

Willem

Hi @willemdh thanks for the great question.

First, just making sure: within your ingestion processes, are you applying the value event.kind:"alert" only to the subset of events that are actually alerts (meaning that they are the output of some kind of external detection or rule)?

In general, we think that analysts will be most successful if they invest their triage time in "signals" rather than "external alerts." If there's a subset of external alerts (e.g., event.severity <= 4) that you are interested in triaging, then creating a detection rule that creates signals for each of those would be the optimal way to go.

You can create a custom query rule for each producer of external alerts, and then spend triage time looking only at signals. The signals generated from these external alerts will sit right alongside your other rule-based detections, and you'll be able to sort and filter them using the same techniques; you'll also be able to investigate them in timeline, create cases, and so on. In this way, you have a common workflow for triage and investigation, regardless of whether the detection was done by some external system or in your Elastic SIEM.

For example, a query-based rule for external alerts could be created for Suricata alerts using the following query: event.kind:alert and event.module:suricata and event.severity <= 2
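
A similar (hypothetical) rule scoped to the panw alerts you mentioned might use: event.kind:alert and event.module:panw and event.severity <= 2, with the severity threshold adjusted to whatever subset you actually want to triage.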

We see External Alerts as useful for a high-level view of what's being ingested into your cluster, based on the event.kind:alert setting.

That said, you raise a good point about possibly adding more "stack by" options in the External Alerts histogram on the overview page, like host.name, source.ip, destination.ip, and event.dataset.

Thanks for the feedback, and hope this helps!

@Mike_Paquette Thanks for the detailed answer.

> First, just making sure: within your ingestion processes, are you applying the value event.kind:"alert" only to the subset of events that are actually alerts (meaning that they are the output of some kind of external detection or rule)?

About that: it is actually the panw module itself that assigns the event.kind:alert value since 7.8.x.

But this was unexpected, and it effectively makes all our other alert events invisible.

Grtz

Willem
