According to the documentation on machine learning jobs, the anomaly score depends on the probability calculated for the specific metrics, and there are three different scores (bucket, influencer, and record scoring). But again, how is that probability calculated? And how can I drill down to the underlying events to see what caused, for example, a score decrease? I would also like to know if there is a simple way to explain the difference among the three types of scoring, because I find it a bit confusing.
This blog post gives some more insight into what influences the record score computation. Bucket and influencer scores are ways of aggregating record scores to provide a more general view. For instance, the bucket score aggregates the record scores from multiple anomaly detection metrics if you configured a multi-metric job.
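To drill down into what drove a particular bucket, one option is to query the job's anomaly records directly via the get records API. The sketch below assumes a hypothetical job ID `my_job` and an example time range and score threshold; adjust these to your own setup.

```
# Sketch: fetch the anomaly records behind a suspicious bucket,
# sorted so the highest-scoring records come first.
# "my_job", the time range, and the score threshold are placeholders.
GET _ml/anomaly_detectors/my_job/results/records
{
  "start": "2023-01-01T00:00:00Z",
  "end": "2023-01-02T00:00:00Z",
  "record_score": 75,
  "sort": "record_score",
  "desc": true
}
```

Each returned record includes the probability it was derived from along with the actual and typical values, which is usually the quickest way to see why a given record scored the way it did; the corresponding bucket and influencer results are available under the analogous `.../results/buckets` and `.../results/influencers` endpoints.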