Unable to use SIEM module

Hi All,

I have set up a lab environment with the below configuration:

Lab firewall sending logs -> (Filebeat -> Logstash -> Elasticsearch), where Filebeat, Logstash, and Elasticsearch are on Alibaba Cloud.

When I try to use the SIEM module, it says I need to add data first. But there are logs flowing into ELK. May I know if there is any way to get past that?

Is logstash creating logstash-* or filebeat-* indices in your setup?

No. I changed the index to "lab-test-%{+YYYY.MM.dd}" in the logstash config file.
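For context, the relevant output section of my Logstash config looks roughly like this (the host is a placeholder):

```conf
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]   # placeholder host
    index => "lab-test-%{+YYYY.MM.dd}"
  }
}
```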

Okay, so what you need to do is add this index to the security indices in Kibana's advanced settings.
The SIEM app does not look at custom indices by default.
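If I remember correctly, the setting is called `siem:defaultIndex`; you append your pattern to the existing list, for example:

```
apm-*-transaction*, auditbeat-*, endgame-*, filebeat-*, packetbeat-*, winlogbeat-*, lab-test-*
```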

Thanks for your suggestion; I am now able to get into the SIEM app. But, unfortunately, I got another error.

[illegal_argument_exception] Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [source.ip] in order to load field data by uninverting the inverted index. Note that this can use significant memory. (and the same message repeated many more times)

It looks like your source.ip field is not mapped as the ip data type.

You may need to adapt your mapping...
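As a sketch (the index pattern and host are assumptions based on this thread), an index template that maps source.ip as the ip data type could be put in place before the next daily index is created:

```shell
# Sketch: legacy index template mapping source.ip as the ip type
# (index pattern "lab-test-*" and host are assumptions from this thread)
curl -X PUT "localhost:9200/_template/lab-test" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["lab-test-*"],
  "mappings": {
    "properties": {
      "source": {
        "properties": {
          "ip": { "type": "ip" }
        }
      }
    }
  }
}'
```

Note that a template only affects newly created indices, so it takes effect when the next day's index rolls over (or after you reindex).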

BTW: If you don't have any specific reason to use Logstash in this case, it would be much easier to send the Filebeat data directly to Elasticsearch.

Hi @new2_elk,

To expand a bit on what @Felix_Roessel mentioned,

You may need to adapt your mapping...

I want to ensure that you are aware of Elastic Common Schema (ECS).

The Elastic SIEM/Security app, including its detection rules, signals, and detection alerts, requires your data to be indexed in an ECS-compliant format. ECS is an open source, community-developed schema that specifies field names and Elasticsearch data types for each field, and provides descriptions and example usage.

As @Felix_Roessel says, the easiest way to get your data into an ECS-compliant format is to use an Elastic-supplied integration (e.g., a Filebeat module or an Elastic Agent integration), which will ingest and index your data in an ECS-compliant format. Elastic provides a growing list of these integrations that you can find on our Integrations page.

If you're using a custom data ingestion method (like your use of Logstash), or one provided by a third-party, then you may need to convert your data so that it is in an ECS-compliant format before you can use the SIEM/security app. This can be done by creating your own beat/module, or your own Logstash configuration for each data source, which will convert your data to ECS during the ingestion process.
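A minimal sketch of such a Logstash conversion, assuming hypothetical raw field names (`src_ip`, `log_time`) from the device log:

```conf
filter {
  mutate {
    # rename the raw field (hypothetical name) to its ECS equivalent
    rename => { "src_ip" => "[source][ip]" }
  }
  date {
    # parse the device timestamp (illustrative format) into @timestamp
    match  => ["log_time", "yyyy/MM/dd HH:mm:ss"]
    target => "@timestamp"
  }
}
```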

General guidelines for creating ECS-compliant data:

  1. Each indexed document (e.g., your log, event, etc.) MUST have the @timestamp field.
  2. Your index mapping template must specify the Elasticsearch field data type for each field as defined by ECS. For example, your @timestamp field must be of the date field data type. This ensures that there will not be any mapping conflicts in your indices.
  3. The original fields from your log/event SHOULD be copied/renamed/converted to the corresponding ECS-defined field name and data type.
  4. Additional ECS fields, such as the ECS Categorization fields SHOULD be populated for each log/event, to allow proper inclusion of your data into dashboards and detection rules.
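Putting the guidelines above together, a minimal ECS-compliant network event might look like this (all values made up):

```json
{
  "@timestamp": "2021-03-01T12:34:56.000Z",
  "source": { "ip": "10.0.0.5", "port": 51515 },
  "destination": { "ip": "10.0.0.9", "port": 443 },
  "event": {
    "kind": "event",
    "category": ["network"],
    "type": ["connection"]
  }
}
```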

Here's a simple graphic that I created to help get this point across.

Can you share where your log data is coming from? (e.g., some security device? or some host computer?)


The data is coming from a PA firewall that sends syslog to Filebeat, and then Filebeat passes it to Logstash. But may I know how to change the field type properly?

Thanks for the reply @new2_elk,

Question: are you using the Elastic Filebeat Palo Alto Networks Module?

It currently supports the traffic and threat logs from PA devices, and will perform a mapping to ECS for you.

For those PA log types, this will be the easiest way to get the data converted to ECS.

Hi Mike,

Yes, I am already using the panw module, but the problem is still occurring. Is there any information I can provide for further discussion?

Hi @new2_elk,

The Elastic Filebeat Palo Alto Networks Module includes an Elasticsearch index template that properly maps the PANW fields to the proper ECS fields and data types as they are indexed in Elasticsearch.

If your Filebeat system is directly connected to Elasticsearch, this index template will be installed in Elasticsearch when you enable the filebeat panw module and run the filebeat setup command.

However, when using Filebeat with Logstash, you need to manually load the index template into Elasticsearch. Please find instructions on how to do that here.
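In short, you temporarily point Filebeat's setup at Elasticsearch instead of Logstash, e.g. (adjust the host to your environment):

```shell
filebeat setup --index-management \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["localhost:9200"]'
```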

To ensure that the Security App will look at the newest data, you may want to remove your existing filebeat-* indices before manually loading the index template. If you're OK with deleting the panw logs collected so far, then instructions on how to do this are here.
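Deleting those indices is a single request (this permanently removes the data, so only do it if you are fine losing the logs collected so far):

```shell
curl -X DELETE "localhost:9200/filebeat-*"
```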

Once you've loaded the index template into Elasticsearch, and restarted sending the PANW logs to Filebeat, you should no longer get mapping errors, and you should be able to work with the PANW events in the security solution.

Please let us know if this works!