Hello @RdrgPorto, @andrewkroh @legoguy1000,
Hopefully I can shed some light on some of these questions; please let me know if there is anything I didn't cover.
-
For the question around enabling/disabling, it seems to have been covered already by Andrew. It's something that is of course up for discussion, and following that issue seems to be the best approach.
-
The GeoIP processor is absolutely a valid point for the threatintel.indicator fields. I will bring this up and see if we want to enable it as well.
-
Changing the logged message from error to debug was implemented in 7.12.1/7.13, I believe; I will check whether MISP is using that new option as well. It should now only be logged when debug is enabled. The reason is that any module that uses pagination will always receive an empty response once it reaches the last page, so even warning level is too high; we only want to see this message while debugging.
-
The duplication between the old and new MISP modules is a bit different. The new MISP fileset, which is part of the threatintel module, should overwrite updates because of the op_type that is set. It also uses a different API than the old MISP module: the old one uses attribute/restSearch, while the new one uses event/restSearch, and the format in which the events come back is a bit different.
When we receive any of the raw events, they all look like this: beats/misp_sample.ndjson.log at master · elastic/beats · GitHub
Each attribute related to an Event is split into its own document. Every Event has its own uuid, and every attribute also has its own uuid.
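To make the splitting concrete, here is a minimal Python sketch of the idea. The event structure below is a simplified, hypothetical approximation of an events/restSearch response (only Event, Attribute, uuid, and timestamp mirror the real format; the other field names and all values are made up), and `split_event` is an illustrative helper, not code from the module:

```python
# Hypothetical, simplified MISP event; values and extra fields are illustrative.
event = {
    "Event": {
        "uuid": "event-uuid-0001",
        "timestamp": "1617000000",
        "Attribute": [
            {"uuid": "attr-uuid-0101", "type": "ip-dst", "value": "203.0.113.10"},
            {"uuid": "attr-uuid-0102", "type": "sha256", "value": "placeholder-hash"},
        ],
    }
}

def split_event(event):
    """Emit one document per attribute, keyed by the attribute's uuid
    (mirroring what document_id: Event.Attribute.uuid achieves)."""
    docs = []
    for attribute in event["Event"]["Attribute"]:
        docs.append({
            # The attribute uuid becomes the document id, so re-ingesting the
            # same attribute targets the same document.
            "_id": attribute["uuid"],
            "Event": {k: v for k, v in event["Event"].items() if k != "Attribute"},
            "Attribute": attribute,
        })
    return docs

docs = split_event(event)
print(len(docs))        # one document per attribute
print(docs[0]["_id"])   # the first attribute's uuid
```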
The processors used by the module (documented here: beats/config.yml at master · elastic/beats · GitHub) will first set the document_id to the uuid of the attribute:
```yaml
- decode_json_fields:
    fields: [message]
    document_id: Event.Attribute.uuid
    target: json
```
After that, to allow documents to be overwritten (so that instead of creating a duplicate, the old document is replaced by the new one), we override the default op_type field in the metadata:
```yaml
- script:
    lang: javascript
    id: my_filter
    source: >
      function process(event) {
          event.Put("@metadata.op_type", "index");
      }
```
The intended behavior here is that a duplicated event overwrites the existing one instead of creating a second document.
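The effect of op_type "index" can be sketched with a toy model: writes that share an _id replace the stored document rather than accumulating duplicates. The dict-backed `index` and `write` function below are purely illustrative (not the Elasticsearch API), under the assumption that "index" means last-write-wins and "create" rejects an existing id:

```python
# Toy model of op_type semantics: the "index" is just a dict keyed by _id.
index = {}

def write(index, doc_id, doc, op_type="index"):
    # "create" would refuse to replace an existing document...
    if op_type == "create" and doc_id in index:
        raise ValueError(f"document {doc_id} already exists")
    # ...while "index" simply overwrites: last write wins.
    index[doc_id] = doc

write(index, "attr-uuid-0101", {"value": "203.0.113.10", "version": 1})
write(index, "attr-uuid-0101", {"value": "203.0.113.10", "version": 2})  # overwrite, no duplicate
print(len(index), index["attr-uuid-0101"]["version"])
```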
Once Filebeat has received the response from the API, it takes the timestamp from the newest event it received and uses that as a filter on any subsequent API calls, so that we do not end up receiving the same events multiple times and causing constant overwrites.
This is done by setting cursor.timestamp in the request body. If this is the first time Filebeat starts up and cursor.timestamp does not exist yet, it will look back by the value configured as var.first_interval in your configuration file.
```yaml
- set:
    target: body.timestamp
    value: '[[.cursor.timestamp]]'
    default: '[[ formatDate (now (parseDuration "-{{ .first_interval }}")) "UnixDate" ]]'
```
cursor.timestamp itself is set automatically from the timestamp of the last event, like so:
```yaml
cursor:
  timestamp:
    value: '[[.last_event.Event.timestamp]]'
```
Hopefully this clears things up! And thanks for all the feedback, keep it coming, because we are always trying to make the modules better!