I would like to create a job to detect anomalies in the number of events sent by Beats, mostly to detect when one of the Beats stops sending logs to my ELK cluster (and to identify the name of the machine where the Beat stopped working).
So for each Beat I created a multi-metric job using low_count(Event rate), with the split field set to agent.name.
I would like to know if this is the best way to do it, or whether there is another way without splitting by agent.name, or by creating one job for all the Beats instead of one job per Beat.
Do low_count(Event rate) with the split field agent.name, making sure that you use the name of the terms aggregation as the split field, and that you set "summary_count_field_name": "doc_count", because you're having the ML job process the output of aggregations.
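As a rough illustration of the advice above, here is a minimal sketch of what such a job configuration might look like via the Elasticsearch ML API. The job name `beats-event-rate` and the bucket span are assumptions for the example, and the datafeed would need a terms aggregation named `agent.name` feeding `doc_count`:

```json
PUT _ml/anomaly_detectors/beats-event-rate
{
  "analysis_config": {
    "bucket_span": "15m",
    "summary_count_field_name": "doc_count",
    "detectors": [
      {
        "function": "low_count",
        "partition_field_name": "agent.name"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

With a single job partitioned by `agent.name` like this, each machine gets its own baseline, so you don't need one job per Beat, and the anomaly record tells you which agent's event rate dropped.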