Aggregation of single jobs in machine learning

I would like to aggregate or consolidate several single-metric or multi-metric jobs in order to correlate different issues that could affect one application.
What I would like to do is:

  • One multi-metric job based on performance metrics for the 5 servers of my application (in the metricbeat index)
  • One multi-metric job based on errors in the logs of the 5 servers of my application (in the filebeat index)
  • One single-metric job based on the usage of my application (in the filebeat index)

The objective is to alert only if there are at least two issues at the same time (one related to usage and one related to the capacity of a server).
How can I consolidate different jobs so that a notification is sent only when different issues are correlated?

Hi,

This is the sort of functionality we're looking to make much more accessible in a future release.

It is possible today if you're prepared to create some complicated watches, but in the future we'll add a wrapper so that you don't have to create the watch directly.

Here are some clues if you want to try to use Watcher with ML in 5.4:

  • By default ML results are stored in an index called .ml-anomalies-shared, so unless you have changed this, the results from all 3 of your jobs will be in there
  • The results have a field job_id which indicates which job they came from, plus a timestamp that indicates which bucketed time period they relate to
  • There are different types of results, indicated by the result_type field
  • If you want to alert when any two jobs have overall anomalies at the same time, look for the anomaly_score field being greater than some threshold (say 75) on results where result_type is bucket, and alert when there are two such results with the same or similar timestamps (see the sketch after this list)
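
To make those clues concrete, here is a rough sketch of what such a watch could look like against 5.4. The watch ID (ml_correlated_alert), the 15-minute trigger interval, the 75 score threshold and the 30-minute lookback window are all assumptions you would tune to your own bucket spans; this is not a finished implementation:

```
PUT _xpack/watcher/watch/ml_correlated_alert
{
  "trigger": {
    "schedule": { "interval": "15m" }
  },
  "input": {
    "search": {
      "request": {
        "indices": [ ".ml-anomalies-shared" ],
        "body": {
          "size": 0,
          "query": {
            "bool": {
              "filter": [
                { "term":  { "result_type": "bucket" } },
                { "range": { "anomaly_score": { "gte": 75 } } },
                { "range": { "timestamp": { "gte": "now-30m" } } }
              ]
            }
          },
          "aggs": {
            "jobs": { "terms": { "field": "job_id" } }
          }
        }
      }
    }
  },
  "condition": {
    "script": {
      "inline": "return ctx.payload.aggregations.jobs.buckets.size() >= 2",
      "lang": "painless"
    }
  },
  "actions": {
    "log_correlated_anomalies": {
      "logging": {
        "text": "Correlated ML anomalies: {{ctx.payload.hits.total}} bucket results with anomaly_score >= 75 across at least two jobs in the last 30 minutes"
      }
    }
  }
}
```

The search finds bucket-level results above the threshold in the lookback window, the terms aggregation groups them by job_id, and the script condition only fires when at least two distinct jobs are represented. If you specifically want one usage issue plus one server issue, you would tighten the script to check for those particular job IDs, and you would replace the logging action with email or another notification action.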

Basically it is possible, but unless you're a Watcher expert you may prefer to wait until we add functionality to automate the watch setup.
