I have created a machine learning job to detect port scanning using the packetbeat-* index.

Job type: population
Population field: destination.ip
Metric: distinct count(destination.port)
Influencers: destination.ip and source.ip
Bucket span: 15min
It's working well, but I am getting both soft_limit and hard_limit memory warnings:
Job memory status changed to soft_limit; memory pruning will now be more aggressive
Job memory status changed to hard_limit; job exceeded model memory limit 23mb by 1.7mb. Adjust the analysis_limits.model_memory_limit setting to ensure all data is analyzed
I have a dedicated machine learning node: 6 CPU and 8 GB RAM.
Could you please tell me how I can resolve these warnings?
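Based on the hard_limit message, I assume I need to raise analysis_limits.model_memory_limit on the job. My understanding is that this is done with the update job API while the job is closed, for example (the job ID and the 64mb value are just placeholders I picked, not values from my setup):

```
POST _ml/anomaly_detectors/<job_id>/_update
{
  "analysis_limits": {
    "model_memory_limit": "64mb"
  }
}
```

Is simply increasing this limit the right fix here, or does it indicate a problem with how I configured the job (e.g. high-cardinality influencers)?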