They already have all the services (Filebeat, Packetbeat) running, but the hosts are not indexed by Elastic, so my guess is that I need to index them, but I can't find how to do it.
Having a cluster with all my hosts monitored by Stormshield would also be really good.
For example, I would like the IP 192.168.1.253 to appear as one of the hosts.
There are a couple of things needed for data to appear in the SIEM. First, the index that the logs are in must be included in the SIEM configuration. To check that, go to Stack Management → Advanced Settings in Kibana and search for "siem" (depending on your stack version, the setting is `siem:defaultIndex` or `securitySolution:defaultIndex`). Then, I believe, the other thing is that each document must contain the `event.module` and `event.dataset` fields.
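If you want to verify that second part, a quick check from Kibana Dev Tools might look like this (a minimal sketch; `filebeat-*` is just an example index pattern, swap in your own):

```
GET filebeat-*/_search
{
  "size": 1,
  "_source": ["event.module", "event.dataset", "@timestamp"],
  "query": {
    "bool": {
      "filter": [
        { "exists": { "field": "event.module" } },
        { "exists": { "field": "event.dataset" } }
      ]
    }
  }
}
```

If this returns no hits, that's a sign your documents are missing those fields.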
The Elastic SIEM/Security app, including its detection rules, signals, and detection alerts, requires your data to be indexed in an ECS-compliant format. ECS (Elastic Common Schema) is an open source, community-developed schema that specifies field names and Elasticsearch data types for each field, and provides descriptions and example usage.
The easiest way to get your data into an ECS-compliant format is to use an Elastic-supplied integration (e.g., a Filebeat module or an Elastic Agent integration), which will ingest and index your data in an ECS-compliant format. Elastic provides a growing list of these integrations, which you can find on our Integrations page.
We don't seem to have an integration for Stormshield, so you may need to convert your data into an ECS-compliant format yourself before you can use the SIEM/Security app. This can be done by creating your own Beat, Logstash pipeline, or Elasticsearch ingest node pipeline that converts your data to ECS during the ingestion process. You can check out an experimental tool called ECS Mapper to help you create these.
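For example, a minimal ingest node pipeline sketch might look like the following. Note that the raw field names here (`src`, `dst`, `time`) are assumptions for illustration, not actual Stormshield field names; check your own logs for the real ones:

```
PUT _ingest/pipeline/stormshield-to-ecs
{
  "description": "Sketch: map hypothetical Stormshield fields to ECS",
  "processors": [
    { "rename": { "field": "src", "target_field": "source.ip", "ignore_missing": true } },
    { "rename": { "field": "dst", "target_field": "destination.ip", "ignore_missing": true } },
    { "date": { "field": "time", "formats": ["ISO8601"], "target_field": "@timestamp", "if": "ctx.time != null" } },
    { "set": { "field": "event.module", "value": "stormshield" } },
    { "set": { "field": "event.dataset", "value": "stormshield.log" } }
  ]
}
```

You can then apply it at index time with `?pipeline=stormshield-to-ecs`, or set `index.default_pipeline` on the index so it runs automatically.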
In general, the minimum requirements are:

- Each indexed document (e.g., your log, event, etc.) MUST have the `@timestamp` field.
- Your index mapping template must specify the Elasticsearch field data type for each field as defined by ECS. For example, your `@timestamp` field must be of the `date` field data type. This ensures that there will not be any mapping conflicts in your indices (see the template sketch after this list).
- The original fields from your log/event SHOULD be copied/renamed/converted to the corresponding ECS-defined field name and data type.
- Additional ECS fields, such as the ECS Categorization fields, SHOULD be populated for each log/event, to allow proper inclusion of your data into dashboards and detection rules.
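Here's a sketch of what such a mapping template might look like. The template name and index pattern are placeholders, and the `_index_template` API requires Elasticsearch 7.8+ (on older versions, use the legacy `PUT _template` API instead):

```
PUT _index_template/stormshield-template
{
  "index_patterns": ["stormshield-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp":  { "type": "date" },
        "source":      { "properties": { "ip": { "type": "ip" } } },
        "destination": { "properties": { "ip": { "type": "ip" } } },
        "event": {
          "properties": {
            "kind":     { "type": "keyword" },
            "category": { "type": "keyword" },
            "type":     { "type": "keyword" },
            "module":   { "type": "keyword" },
            "dataset":  { "type": "keyword" }
          }
        }
      }
    }
  }
}
```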
Here's a simple graphic that I created to help get this point across.