Using Indicator Match Rules with Cross Cluster Search

Hello!

We have set up a SIEM infrastructure using Elastic Security with multiple Elasticsearch clusters distributed across different locations (multiple datacenters), and we have configured CCS (Cross Cluster Search) to get a single pane of glass for dashboards and detection rules, as explained by Aaron Jewitt in the blog post Elastic on Elastic: Configuring the Security app to use Cross Cluster Search | Elastic Blog.

Recently we started integrating our SIEM with threat intel from MISP, and we noted that the rule type we need is the indicator match type. However, we found that indicator match rules are not supported with the Cross Cluster Search feature, as mentioned in the following document: Detections and alerts | Elastic Security Solution [8.6] | Elastic.

Now we would like to know if there is a workaround to use the indicator match rule type in this scenario. According to Aaron's blog mentioned above, the infosec team at Elastic uses this topology, but he didn't mention how to configure this kind of rule.

Can anyone help us with this scenario?

Thanks in advance!

Hi José, while CCS isn't officially supported with indicator match rules due to possible performance issues, we have found on the infosec team that if you specify specific remote clusters instead of using a broad wildcard, you can use them without a noticeable performance impact. For example, instead of using *:filebeat-* as your index pattern you should use remote_cluster_name:filebeat-*. Here is an example of how we use them (with a quick Dev Tools sketch of the index-pattern difference first).
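
This is just an illustration of the index-pattern scoping; remote_cluster_name is a placeholder for whatever alias your remote cluster is registered under in your CCS configuration:

```
// Broad wildcard: the search fans out to every configured remote cluster
GET *:filebeat-*/_search

// Scoped to a single named remote cluster: same data, much less fan-out
GET remote_cluster_name:filebeat-*/_search
```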

We have a remote cluster that we call our infosec_data_warehouse, which contains all of our asset inventory data. This is information about entities such as hosts, users, applications, etc. This data warehouse is updated daily and has information from a lot of different sources such as gsuite, aws, okta, gcp, workday, and many more. We can create indicator match rules using this remote cluster as our indicator index pattern. One use case is to create alerts on our existing alerts, elevating the severity of an alert based on the work role of a user.

In this example we are creating a new detection rule that will let us know when an alert is triggered for a user who has admin privileges in our Google Workspace (a rough sketch of the rule definition follows the list below).

  • We set the source index pattern to be the local .siem-signals-* index pattern that contains all of our existing alerts with a wildcard query to select all existing alerts.
  • We then set the indicator index pattern to our data warehouse remote cluster and index pattern.
  • In the indicator index query we narrow our search down to only documents where the user is an admin of gsuite.
  • We then use the indicator mapping fields to generate a new alert whenever the user.email value from an alert matches the user.email value in the data warehouse index.
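
Here is a rough sketch of what that rule could look like as the body of a POST to Kibana's detection engine API (/api/detection_engine/rules). The index names, the user.roles field, and the schedule are illustrative placeholders rather than our exact production rule:

```json
{
  "name": "Alert involves a Google Workspace admin",
  "description": "Raises a higher-severity alert when an existing alert involves a gsuite admin",
  "type": "threat_match",
  "risk_score": 73,
  "severity": "high",
  "index": [".siem-signals-*"],
  "language": "kuery",
  "query": "*",
  "threat_index": ["infosec_data_warehouse:asset-inventory-users-*"],
  "threat_query": "user.roles : \"gsuite_admin\"",
  "threat_mapping": [
    {
      "entries": [
        { "field": "user.email", "type": "mapping", "value": "user.email" }
      ]
    }
  ],
  "interval": "1h",
  "from": "now-65m",
  "enabled": true
}
```

In the rule creation UI, these settings map onto the same fields described in the list above.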

Hi Aaron, thank you so much for your answer!

I think I understood your use case. It did raise a question for me about how indicator match rules work.

When the rule is executed, how does it compare the values in the indicator index with those in the source index?

Does the rule send all the values present in the source index fields as a filter in the query, or does it get all the values present in the indicator index using a match-all query? This got me a little confused.

Is there any way to execute this rule in the remote cluster, using both the source index and the indicator index on the remote cluster? My intention is to minimize network throughput while those rules are running, considering the high volume of data stored in Elasticsearch (hundreds of TB).

Thanks in advance again!

When the rule runs it will first query the 'Indicator Index' using the query you provided. Using all of the documents returned by that query, it will then take the field you specified and put all of those indicators together in an 'OR' query and run that against the source index.

For example, if your indicator index query is for IP addresses and returns 1.1.1.1, 2.2.2.2, 3.3.3.3, 4.4.4.4, and 5.5.5.5, it will then take those and query your source index with source.ip: ("1.1.1.1" OR "2.2.2.2" OR "3.3.3.3" OR "4.4.4.4" OR "5.5.5.5"). I'm sure you can imagine how these queries can become resource intensive if you include hundreds or thousands of indicators or you are querying several remote clusters.
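
Conceptually, the two phases look something like the Dev Tools sketch below. This is an illustration of the behaviour rather than the engine's literal internals, and the threat-intel-* index and threat.indicator.* field names are placeholders:

```
// Phase 1: run the indicator index query and collect the indicator field values
GET threat-intel-*/_search
{
  "_source": ["threat.indicator.ip"],
  "query": { "term": { "threat.indicator.type": "ipv4-addr" } }
}

// Phase 2: the collected values are OR'd together and run against the source index
GET filebeat-*/_search
{
  "query": {
    "terms": { "source.ip": ["1.1.1.1", "2.2.2.2", "3.3.3.3", "4.4.4.4", "5.5.5.5"] }
  }
}
```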

To limit the query size and performance impact we add time constraints to our IOCs. For example, in the indicator index query we will usually add @timestamp > "now-7d/d" to only use IOCs that have been added in the last 7d.
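
As a rough example, the full indicator index query (KQL) could then look something like this, where threat.indicator.type is a placeholder for however your MISP feed names that field:

```
@timestamp > "now-7d/d" and threat.indicator.type : "ipv4-addr"
```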

A way to do this is to configure the threat index and detection rule on the remote cluster, and then on your central cluster configure the .siem-signals* index patterns as part of the CCS config. This would let your detection rules run on the remote clusters, but you could still see the alerts from your central cluster through the *:.siem-signals* index pattern. This is a bit more complex and adds some management overhead, but it would let you run the detections completely on the remote cluster while being able to see the results on a central cluster.
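
A minimal sketch of what that looks like from the central cluster's side (assuming your remote clusters are already registered for CCS, as they are in your environment):

```
// Run from the central cluster's Dev Tools: alerts written by the remote clusters'
// detection rules become searchable through the CCS index pattern
GET *:.siem-signals*/_search
{
  "size": 5,
  "sort": [{ "@timestamp": "desc" }]
}
```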

Hello Aaron, thank you again for your answer, it really helped me understand how these rules work.

The solution of executing the detection rule completely on the remote cluster sounds like a great option for our use case; however, when trying to set it up in our staging environment, I could not find how to change the default data view for the Alerts in the central cluster in a way that lets us see the remote alerts. I have searched the documentation and did not find any way.

We are using Elasticsearch version 8.4.0 and the alert index is .alerts-security.alerts-default. I've tried to add it to the Security Data View but it did not work.

How can I change this Data View in the central cluster to add the *:.alerts-security.alerts-default index pattern?

CCS is already configured and I can query this index through Dev Tools, as you can see in the image below.

Thanks again in advance!

Hi @jcruz, you can't directly get the alerts from one cluster to show up in another cluster's UI. A workaround for this is to create a separate detection rule on your central cluster that looks at the remote cluster's alerts index patterns and then creates a new alert on the central cluster. When you do this, you can configure the rule to rename the alerts using the original kibana.alert.rule.name value so they show up with the original rule name in your SIEM.
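
Here is a rough sketch of such a forwarding rule as the body of a POST to /api/detection_engine/rules. The remote cluster alias, schedule, and severity are illustrative; rule_name_override is what carries the original rule name through:

```json
{
  "name": "Remote cluster alert forwarder",
  "description": "Re-creates alerts from the remote cluster on the central cluster, keeping the original rule name",
  "type": "query",
  "risk_score": 47,
  "severity": "medium",
  "index": ["remote_cluster_name:.alerts-security.alerts-default"],
  "language": "kuery",
  "query": "kibana.alert.rule.name : *",
  "rule_name_override": "kibana.alert.rule.name",
  "interval": "5m",
  "from": "now-6m",
  "enabled": true
}
```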

There are some downsides to this, in that you can't take Fleet-based response actions such as OSQuery or host isolation directly from these alerts. To do that you will need to go to the cluster that is running Fleet.

Thank you @Aaron_Jewitt, I tried it and it worked great. Despite the downsides, it helped us understand what is possible at this point.
