Creating job: anomaly in the event rate of beats


I would like to create a job that detects anomalies in the number of events sent by Beats, mostly to detect when one of the Beats stops sending logs to my ELK cluster (and to identify the name of the machine where the Beat stopped working).

So for each Beat I created a multi-metric job, with the detector low count (event rate) and a split field.

So the job for each Beat looks like this:
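A minimal sketch of what such a per-beat multi-metric job might look like via the anomaly detection API, assuming a 15-minute bucket span and `host.name` as the split field (both are assumptions, not from the original post):

```json
PUT _ml/anomaly_detectors/filebeat-event-rate
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "low_count",
        "partition_field_name": "host.name",
        "detector_description": "low event rate, split by host.name"
      }
    ],
    "influencers": ["host.name"]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

With this setup you would create one such job per Beat index, which is what the question below asks how to avoid.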

I would like to know whether this is the best way to do it, or whether there is another way, either without splitting the field or by creating one job for all the Beats instead of one job per Beat.

Best regards

It seems like the best (and most efficient) way would be to create an Advanced job where you:

  1. Query the index pattern where all of the Beats publish their data (i.e. `filebeat-*`).
  2. Include a terms aggregation (sized appropriately to return data for all agents) that aggregates the counts for every agent, and have that be the datafeed (follow the example here:
  3. Use low count (event rate) with a split field, making sure that you use the name of the terms aggregation as the split field and that you set `"summary_count_field_name": "doc_count"`, because you're having the ML job process the output of aggregations.
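The steps above could be sketched roughly as follows, as a single job plus an aggregating datafeed. The job id, index pattern, bucket span, and the `host.name` field are all assumptions for illustration; note that the terms aggregation name (`host` here) is what the detector uses as its split field, and `summary_count_field_name` tells the job to read pre-summarized counts from `doc_count`:

```json
PUT _ml/anomaly_detectors/beats-event-rate
{
  "analysis_config": {
    "bucket_span": "15m",
    "summary_count_field_name": "doc_count",
    "detectors": [
      {
        "function": "low_count",
        "partition_field_name": "host",
        "detector_description": "low event rate per agent"
      }
    ],
    "influencers": ["host"]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}

PUT _ml/datafeeds/datafeed-beats-event-rate
{
  "job_id": "beats-event-rate",
  "indices": ["filebeat-*"],
  "aggregations": {
    "buckets": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "900s"
      },
      "aggregations": {
        "@timestamp": {
          "max": { "field": "@timestamp" }
        },
        "host": {
          "terms": {
            "field": "host.name",
            "size": 1000
          }
        }
      }
    }
  }
}
```

The `size` on the terms aggregation should be large enough to cover all agents; if an agent's bucket is missing entirely, the low count detector is what flags it.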

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.