I'm currently working with Elastic 8.3 using an Agent (Fleet managed) with Netflow Integration.
My current goal is to create two ML jobs for spikes in traffic between two IPs, but I can't figure out how to split/aggregate this data using the job configuration in Kibana.
My best result so far was a Multi-metric job using non_null_sum("source.bytes") with source.ip and destination.ip as influencers, but the results sometimes show more than one destination.ip or source.ip, depending on the time bucket, I think:
Other attempts included using a split field on the metric, and creating an advanced job with two detectors, but validation said that source.ip and destination.ip weren't aggregatable.
One last attempt was to create a table visualization in Kibana, export the saved object's aggregation and query, clean it up, and use it as the base for the JSON at the start of the advanced configuration, but after some reading I found out that only the query should go in that field.
Any thoughts on how to aggregate/split the data in the job configuration to achieve the grouping between source and destination, so I can then build the baseline in ML?
This will create separate time series for each source.ip in your environment and generate anomalies when a large number of bytes are sent from a particular source.ip, compared to its baseline. The influencers should tell you the destination.ip they were sent to.
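Concretely, the detector portion of the job could look something like this (a sketch of the `analysis_config` JSON; the `high_sum` function and the 15m bucket span are example choices on my part, adjust them to your data):

```json
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "high_sum",
        "field_name": "source.bytes",
        "partition_field_name": "source.ip",
        "detector_description": "high_sum(source.bytes) partitioned by source.ip"
      }
    ],
    "influencers": ["source.ip", "destination.ip"]
  },
  "data_description": { "time_field": "@timestamp" }
}
```

`partition_field_name` is what gives you a separate baseline per source.ip, while listing destination.ip as an influencer lets the anomaly record point at the destination involved.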
Hey Joshi, thanks a lot for the input. I had tried it this way before and did again now, but in both cases (Multi-metric and Advanced) I got the error "Detector field "source.ip" is not an aggregatable field."
Funnily enough, even in the Multi-metric preview the partition is done correctly:
Could you please verify the mapping for the source.ip field in the data view you're running the Anomaly Detection on?
It is possible that the source.ip field in your data is mapped as text or some other non-aggregatable type. Here's a link to another Discuss issue that talks about that specific error, and ways to tackle it.
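One quick way to check is the field capabilities API in Kibana Dev Tools (a sketch; replace the index pattern with whatever your data view actually points at):

```
GET logs-netflow.log-*/_field_caps?fields=source.ip,destination.ip
```

In the response, each field entry carries an `aggregatable` flag; for an ML detector field it needs to be `true` across the matched indices.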
I'm using Elastic Agent with the Netflow integration; the mappings are fixed AFAIK, since the integration uses the model from the index template logs-netflow.log, which imports logs-netflow.log@package.
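For what it's worth, pulling the field mapping directly also shows what the template applied (Dev Tools sketch; the index pattern is my assumption based on the integration's data stream name):

```
GET logs-netflow.log-*/_mapping/field/source.ip
```

I'd expect `"type": "ip"` here, which should be aggregatable.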