Can you set partition field and count by as the same field?

I have been trying to create a machine learning job that detects when a certain job has been running for an anomalous amount of time. Sometimes this job runs for much longer than expected and sometimes much shorter. Since I don't have a duration field to work with, I'm relying on high and low doc counts, as those are usually indicative of how long the job runs.

My question is whether I can set my partition field and the count-by field to the same field. I want the job to run independently across each value of this field, but it is also the field whose doc counts I want to analyze.

When I configure my multi-bucket ML job this way, it says it cannot parse the data. However, I can't think of any other field for it to count on.



There is no "count by" field in ML jobs.

I assume you mean by_field_name, which is a way to split the analysis (like a "for each"). It is similar to partition_field_name but has subtle differences.

It sounds like all you need is the count function with a partition_field_name defined, and that's it!
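A minimal sketch of such a job config, assuming your split field is called `job_name` and your time field is `@timestamp` (both hypothetical names for your data). Note that the count function does not take a field_name at all, which is why no separate "count by" field is needed:

```json
PUT _ml/anomaly_detectors/job-run-count
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "count",
        "partition_field_name": "job_name",
        "detector_description": "count partitioned by job_name"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

Because count operates on the number of documents per bucket rather than on a field's values, the same field can happily serve as the partition without any conflict.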
