Okay, so what you need to do is add this index to the security indices in Kibana's advanced settings.
The SIEM app does not look at custom indices by default.
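For reference, the setting is called securitySolution:defaultIndex in newer releases (siem:defaultIndex in older 7.x versions); you just append your custom index pattern to the existing comma-separated list. Something like this, where my-custom-logs-* is a placeholder for your own index:

```
apm-*-transaction*, auditbeat-*, endgame-*, filebeat-*, packetbeat-*, winlogbeat-*, my-custom-logs-*
```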
Thanks for your suggestion; I am now able to get into the SIEM app. But, unfortunately, I got another error:
[illegal_argument_exception] Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [source.ip] in order to load field data by uninverting the inverted index. Note that this can use significant memory.
(The same exception is repeated many times for the same field.)
It looks like your source.ip field is not mapped with the ip data type.
You may need to adapt your mapping...
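To confirm this, you can check how the field is currently mapped, e.g. from Kibana Dev Tools (filebeat-* is an assumption here; use whatever index your Logstash output actually writes to):

```
GET filebeat-*/_mapping/field/source.ip
```

If the type comes back as text instead of ip, the index template has to be fixed and the data reindexed (or deleted and re-ingested), because the data type of an existing field cannot be changed in place.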
BTW: If you don't have a specific reason to use Logstash in this case, it would be much easier to send the Filebeat data directly to Elasticsearch.
The Elastic SIEM/Security app, including its detection rules, signals, and detection alerts, requires your data to be indexed in an ECS-compliant format. ECS (Elastic Common Schema) is an open-source, community-developed schema that specifies field names and Elasticsearch data types for each field, and provides descriptions and example usage.
As @Felix_Roessel says, the easiest way to get your data into ECS-compliant format is to use an Elastic-supplied Beats module (e.g., Filebeat) or Elastic Agent integration, which will ingest and index your data in ECS-compliant format. Elastic provides a growing list of these integrations, which you can find on our Integrations page.
If you're using a custom data ingestion method (like your use of Logstash), or one provided by a third party, then you may need to convert your data to an ECS-compliant format before you can use the SIEM/Security app. This can be done by creating your own Beat/module, or your own Logstash configuration for each data source, which converts your data to ECS during the ingestion process.
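Purely as an illustration (the source field names src_ip and dst_ip are hypothetical; adjust them to whatever your pipeline actually emits), a Logstash filter that renames custom fields to their ECS equivalents could look roughly like this:

```
filter {
  mutate {
    # Rename the original firewall fields to their ECS counterparts
    rename => {
      "src_ip" => "[source][ip]"
      "dst_ip" => "[destination][ip]"
    }
    # Populate a couple of the ECS categorization fields
    add_field => {
      "[event][kind]"     => "event"
      "[event][category]" => "network"
    }
  }
}
```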
Each indexed document (e.g., your log, event, etc.) MUST have the @timestamp field.
Your index mapping template must specify the Elasticsearch field data type for each field as defined by ECS. For example, your @timestamp field must be of the date field data type. This ensures that there will not be any mapping conflicts in your indices (a minimal example template is sketched below this list).
The original fields from your log/event SHOULD be copied/renamed/converted to the corresponding ECS-defined field name and data type.
Additional ECS fields, such as the ECS Categorization Fields, SHOULD be populated for each log/event to allow proper inclusion of your data in dashboards and detection rules.
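To make this concrete, here is a minimal sketch of an index template along those lines. The template name, index pattern, and field list are placeholders and nowhere near the full ECS field set; it uses the composable _index_template API available from Elasticsearch 7.8 (older versions use the legacy _template API):

```
PUT _index_template/ecs-minimal-example
{
  "index_patterns": ["my-custom-logs-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "source":      { "properties": { "ip": { "type": "ip" } } },
        "destination": { "properties": { "ip": { "type": "ip" } } },
        "event": {
          "properties": {
            "kind":     { "type": "keyword" },
            "category": { "type": "keyword" }
          }
        },
        "message": { "type": "text" }
      }
    }
  }
}
```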
Here's a simple graphic that I created to help get this point across.
The data is coming from a PA firewall that sends syslog to Filebeat, and Filebeat then passes it to Logstash. But may I know how to change the field type properly?
The Elastic Filebeat Palo Alto Networks module includes an Elasticsearch index template that maps the PANW fields to the correct ECS fields and data types as they are indexed in Elasticsearch.
If your Filebeat system is directly connected to Elasticsearch, this index template will be installed in Elasticsearch when you enable the filebeat panw module and run the filebeat setup command.
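In that (direct-to-Elasticsearch) case, the steps on the Filebeat host look roughly like this; the exact binary path and whether you want the -e flag depend on your install type and version:

```
filebeat modules enable panw
filebeat setup -e
```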
However, when using Filebeat with Logstash, you need to manually load the index template into Elasticsearch. Please find instructions on how to do that here.
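The documented command for that, run on the Filebeat host, looks like this (replace localhost:9200 with your Elasticsearch host, and add authentication/TLS options if your cluster is secured):

```
filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
```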
Because the new template only applies to indices created after it is loaded, you may want to remove your existing filebeat-* indices before manually loading it, so the Security app only sees correctly mapped data. If you're OK with deleting the panw logs collected so far, then instructions on how to do this are here.
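If you go that route, the indices can be deleted from Kibana Dev Tools. Note that this permanently removes the data, so only run it if you genuinely don't need the logs collected so far:

```
DELETE filebeat-*
```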
Once you've loaded the index template into Elasticsearch, and restarted sending the PANW logs to Filebeat, you should no longer get mapping errors, and you should be able to work with the PANW events in the security solution.