Threshold rule can't group by source.ip, only by source.ip.keyword

My problem is with defining a threshold rule: in the Group by field I can only enter source.ip.keyword, not source.ip.

As a consequence, the rule fails on every execution with the following error:

Bulk Indexing of signals failed: Could not dynamically add mapping for field [source.ip.keyword]. Existing mapping for [source.ip] must be of type object but found [ip]. name: "Prova Nmap detection" id: "8c5e9c50-522d-11ed-895f-390c31d94eb0" rule id: "1fd6fcc8-e7aa-44c9-be7f-5bd699014236" signals index: ".siem-signals-default"

Please help me, I am a beginner student.

Hi @Simone_Calo :wave: and welcome to the forum!

It looks like the issue is caused by incorrect mappings of your source events.

Indices that contain source events for a given rule are determined by the Index patterns field. In your case, the mappings of these indices seem to contain a source.ip.keyword field (which I'd assume has a keyword type), while Elastic Security expects the standard ECS field source.ip of type ip there.

When this rule executes, it copies many fields (like source.*) from a source document into the alert generated from that document, and then tries to index the alert into a separate .alerts-security.alerts-&lt;space-id&gt; index. The alerts index has its own strict mappings, which per ECS expect source.ip to be a field of type ip.

Elastic Security requires source data to be ECS-compliant to work correctly: Elastic Security system requirements | Elastic Security Solution [8.5] | Elastic
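To see the difference concretely, here is a sketch of how an ECS-compliant mapping for source.ip compares to a dynamically created one. On a real cluster you would inspect the actual mapping with `curl -s 'localhost:9200/packetbeat-*/_mapping/field/source.ip?pretty'`; the two JSON fragments below are hypothetical samples for offline comparison only.

```shell
# Hypothetical mapping fragments (not from your cluster):
# ECS-compliant mapping - source.ip has type ip, so rules can aggregate on it.
ecs='{"source":{"properties":{"ip":{"type":"ip"}}}}'

# Dynamically created mapping - source.ip became text with a keyword
# sub-field, which is why only source.ip.keyword shows up as a
# group-by candidate in the rule editor.
dynamic='{"source":{"properties":{"ip":{"type":"text","fields":{"keyword":{"type":"keyword"}}}}}}'

# Count occurrences of the correct type declaration in each fragment:
echo "$ecs" | grep -c '"type":"ip"'       # present in the ECS mapping
echo "$dynamic" | grep -c '"type":"ip"'   # absent from the dynamic mapping
```

If the second check comes back empty for your indices, the index templates were never installed and Elasticsearch guessed the field types from the incoming JSON.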

Let me know if this helps!

Okay, thanks a lot. So what should I do to solve the problem?

Maybe here, instead of filebeat, I have to insert packetbeat (the only component I'm using)?

So what should I do to solve the problem?

It depends. Are you running a production cluster or just learning/testing the app locally or in Cloud?

I'm running the app locally for educational purposes, I'm a student. As you may have guessed, I would like to generate an alert when many packets from the same source IP address are detected, so I really do need to group by source.ip.
Thank you very much for the time you are dedicating to me.

I tried to display the field with the command and I got this output:

Could it be useful?

Also, in Discover I can run this query without any problems:

I would be very grateful if you could help me because I don't have much time (I have an exam on Wednesday).
I await your kind reply, thank you very much.

Hey @Simone_Calo, ultimately, you need to fix the incorrect mappings in your packetbeat-* indices. Since you're running the app locally for educational purposes, the easiest way would probably be to erase all your Elasticsearch data and start from scratch.

  1. Export the rules you need (there's a bulk action for it in the Rules table).
  2. Stop packetbeat and any other beats you have.
  3. Stop your local Kibana and Elasticsearch instances.
  4. Delete the folder where Elasticsearch stores its data on the file system. The path depends on your OS and the way you installed ES; please refer to the docs to determine it: Configuring Elasticsearch | Elasticsearch Guide [8.5] | Elastic
  5. Start Elasticsearch and Kibana.
  6. Execute packetbeat setup -e (docs) - this correctly sets up the mappings for the packetbeat-* indices, as well as doing other preparatory work.
  7. Start packetbeat.
  8. Import the previously exported rules.
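A small sanity check for step 6, as a sketch: it assumes Elasticsearch on localhost:9200 without auth and a template named packetbeat-8.5.0 (adjust the version to your install). After `packetbeat setup -e`, the packetbeat index template must exist before the beat ships any data.

```shell
# Decide whether it is safe to start packetbeat, based on the HTTP
# status returned when looking up the packetbeat index template.
check_template() {
  # $1: HTTP status code from the index-template lookup
  if [ "$1" = "200" ]; then
    echo "template installed - safe to start packetbeat"
  else
    echo "missing template - run 'packetbeat setup -e' first"
  fi
}

# In a real session you would feed it from curl (template name is an
# assumption - list templates with GET _index_template to find yours):
# status=$(curl -s -o /dev/null -w '%{http_code}' \
#   'localhost:9200/_index_template/packetbeat-8.5.0')
# check_template "$status"

check_template 404   # prints the "missing template" warning
```

If the template is missing when the first event arrives, Elasticsearch falls back to dynamic mapping, which recreates exactly the source.ip.keyword problem.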

It's important to run ...beat setup -e for any beat before starting it. Starting a beat without running setup first leads Elasticsearch to dynamically create incorrect index mappings, which was probably the cause of your issue.

Hope this helps. Best of luck with the exam!

Thank you very much, I'll try as soon as possible, and as soon as it's done I'll update you. Thank you very much.

I tried, but unfortunately nothing has changed and I get the same error. (Among other things, I was already running the suggested command every time I started the packetbeat container.)

Have you tried the same steps with a fresh installation of Elastic Stack on the host OS?

A containerized setup adds another dimension of complexity - e.g. you need to know where the data is persisted, in what order the containers start, and how errors are handled when not all the containers are available.
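In particular, "delete the data folder" (step 4) means something different in Docker: the data usually lives in a named volume rather than a host directory. A sketch of how you might tell the two cases apart - the JSON below is a hypothetical sample of what `docker inspect -f '{{json .Mounts}}' <your-es-container>` could print (container and volume names are assumptions):

```shell
# Hypothetical mount info for an Elasticsearch container; in reality,
# capture it with: docker inspect -f '{{json .Mounts}}' <your-es-container>
mounts='[{"Type":"volume","Name":"esdata","Destination":"/usr/share/elasticsearch/data"}]'

# Named volume vs. bind mount determines how you wipe the data:
case "$mounts" in
  *'"Type":"volume"'*)
    echo "named volume: wipe with 'docker compose down -v', then 'docker compose up -d'" ;;
  *)
    echo "bind mount: stop the container and delete the host directory" ;;
esac
```

Note that `docker compose down -v` removes the named volumes declared in your compose file, so only run it when you really intend to erase all indices.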