Bulk Indexing of signals failed: Could not dynamically add mapping for field [source.ip.keyword]. Existing mapping for [source.ip] must be of type object but found [ip].
name: "Prova Nmap detection"
id: "8c5e9c50-522d-11ed-895f-390c31d94eb0"
rule id: "1fd6fcc8-e7aa-44c9-be7f-5bd699014236"
signals index: ".siem-signals-default"
It looks like the issue is caused by incorrect mappings of your source events.
The indices that contain source events for a given rule are determined by the rule's Index patterns field. In your case, the mappings of these indices seem to contain a source.ip.keyword field (which I'd assume is of type keyword), while Elastic Security expects a standard ECS field source.ip of type ip there.
When this rule executes, it copies many fields (such as source.*) from each source document to the alert generated from that document, and then tries to index the alert into a separate .alerts-security.alerts-<space-id> index. The alerts index has its own strict mappings, where source.ip is expected to be a field of type ip according to ECS.
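You can check both sides of this yourself with the field mapping API. A minimal sketch, assuming a default local setup on localhost:9200 without security enabled (add -u elastic:<password> otherwise); the index names come from your rule and from the error message:

```
# what the source indices actually have for source.ip
curl -s "localhost:9200/packetbeat-*/_mapping/field/source.ip*?pretty"

# what the signals index expects (type "ip" per ECS)
curl -s "localhost:9200/.siem-signals-default/_mapping/field/source.ip?pretty"
```

If the first call shows source.ip as text with a source.ip.keyword sub-field instead of type ip, the mappings of your source indices are the problem.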
I'm running the app locally for educational purposes; I'm a student. As you may have guessed, I would like to generate an alert when many packets from the same source IP address are detected, so I necessarily need to group by source.ip.
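For reference, what I'm after is essentially a threshold rule grouped by source.ip, roughly something like this via Kibana's Detection Engine API (just a sketch: the name, query, index pattern and threshold value are illustrative, and I'm assuming Kibana on localhost:5601 without auth):

```
curl -s -X POST "localhost:5601/api/detection_engine/rules" \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{
    "name": "Prova Nmap detection",
    "description": "Alert when many packets arrive from the same source IP",
    "risk_score": 50,
    "severity": "medium",
    "type": "threshold",
    "index": ["packetbeat-*"],
    "query": "source.ip: *",
    "threshold": { "field": ["source.ip"], "value": 100 },
    "interval": "5m",
    "enabled": true
  }'
```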
Thank you very much for the time you are dedicating to me
I would be very grateful if you could help me because I don't have much time (I have an exam on Wednesday).
I await your kind reply, thank you very much.
Hey @Simone_Calo, ultimately, you need to fix the incorrect mappings in your packetbeat-* indices. Since you're running the app locally for educational purposes, the easiest way would probably be to erase all your Elasticsearch data and start from scratch.
Export the rules you need (there's a bulk action for it in the Rules table).
Stop packetbeat and any other beats you have.
Stop your local Kibana and Elasticsearch instances, erase your Elasticsearch data (as mentioned above), and start Elasticsearch and Kibana again with a clean state.
Execute packetbeat setup -e (docs) - this will correctly set up the index template and mappings for the packetbeat-* indices, as well as do other preparatory things (a minimal console sketch of this flow follows the list).
Start packetbeat.
Import the previously exported rules.
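Put together, the middle steps look roughly like this on a plain local (non-container) install, assuming Elasticsearch on localhost:9200 with no auth (adjust hosts and credentials as needed):

```
# after stopping packetbeat/Kibana/Elasticsearch, wiping the Elasticsearch data
# and starting Elasticsearch + Kibana again with a clean state:

# recreate the packetbeat index template, ILM policy and dashboards
packetbeat setup -e

# sanity check: a packetbeat template should now exist
curl -s "localhost:9200/_cat/templates/packetbeat*?v"

# once data starts flowing, source.ip should be mapped as type "ip"
curl -s "localhost:9200/packetbeat-*/_mapping/field/source.ip?pretty"

# start shipping data again (foreground, logs to stderr)
packetbeat -e
```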
It's important to run ...beat setup -e for any beat before running the beat itself. Starting a beat without this command leads to Elasticsearch dynamically creating bad index mappings, which was probably the reason for your issue.
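You can see what goes wrong on a throwaway index: without a template, Elasticsearch dynamically maps an IP value as text plus a keyword sub-field rather than type ip (the demo index name and local defaults are purely illustrative):

```
# index one document into an index that has no template behind it
curl -s -X POST "localhost:9200/demo-no-template/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"source": {"ip": "10.0.0.1"}}'

# dynamic mapping result: source.ip is "text" with a "source.ip.keyword" sub-field
curl -s "localhost:9200/demo-no-template/_mapping/field/source.ip*?pretty"
```

That keyword sub-field is exactly the source.ip.keyword field from your error message, and it cannot be copied into an alerts index where source.ip is already mapped as ip.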
I tried, but unfortunately nothing has changed and I still get the same error (among other things, I was already running the suggested command every time the packetbeat container started).
Have you tried the same steps with a fresh installation of Elastic Stack on the host OS?
A containerized setup adds another dimension of complexity - e.g. you need to know where the data is persisted, in what order the containers start, and how errors are handled when not all of the containers are available.
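As an illustration of the ordering concern only (the container names es01, kibana and packetbeat are made up), one rough way to make the start order explicit is to wait for Elasticsearch to report at least yellow health before starting anything that depends on it:

```
docker start es01
# keep polling until Elasticsearch is reachable and at least yellow
until curl -sf "localhost:9200/_cluster/health?wait_for_status=yellow&timeout=5s" >/dev/null; do
  sleep 2
done
docker start kibana
docker start packetbeat   # packetbeat setup should also only run once Elasticsearch is up
```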