We have a scenario in which we have hundreds of remote servers on customer sites. Each server runs a speed test once an hour and feeds the results back to Elasticsearch via Filebeat. Each result is stored as a float in either a download or an upload field. This all shows up in Discover as we would expect.
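For context, each indexed document looks roughly like this (the `download` and `upload` fields are ours; the timestamp and `host.name` fields are illustrative and assume default Filebeat/ECS naming, so the exact mapping may differ):

```json
{
  "@timestamp": "2024-05-01T10:00:00Z",
  "host": { "name": "customer-site-042" },
  "download": 94.7,
  "upload": 18.3
}
```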
I was hoping to use ML anomaly detection to keep track of these results for each server and report on, well, any anomalies.
I feel like this should be entirely possible, but what is the best approach? Can one job do this for every server, perhaps using the hostname as a differentiator? Or would we need a separate job per server? The former is obviously much more attractive; I'm just not sure whether it's doable.
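In case it clarifies what I'm after, here is roughly what I imagined a single split job might look like, as a Dev Tools console request. This is only a sketch: the job name, bucket span, and the `host.name` field are my guesses, and I'm not sure `partition_field_name` is the right mechanism here.

```json
PUT _ml/anomaly_detectors/speedtest_per_host
{
  "analysis_config": {
    "bucket_span": "1h",
    "detectors": [
      { "function": "mean", "field_name": "download", "partition_field_name": "host.name" },
      { "function": "mean", "field_name": "upload",   "partition_field_name": "host.name" }
    ],
    "influencers": [ "host.name" ]
  },
  "data_description": { "time_field": "@timestamp" }
}
```

The idea would be that each host gets its own baseline within one job, rather than us managing hundreds of jobs, but I'd welcome correction if that's not how splitting works.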
Any advice would be appreciated.