ML - Not picking up anomalies?

machine-learning

(Richard Morwood) #1

I am investigating X-Pack's Machine Learning as a way to tell me when a customer's orders are declining or have stopped. I've got data from one client I know we lost, and am seeing whether X-Pack could have alerted us to their changed behavior.

The time range is a 7-month period, Jan to July, bucketed daily.
You can see the Mon-Sun cycle, causing 0s throughout the year. I have set up a single metric job and it tells me of a few anomalies through the year. I understand the large empty range at the start of the year, so that part is OK. From mid-Feb to June it also looks OK. At first I thought "hooray, it deals with weekends nicely." Unfortunately it misses the client's order drop until far too late.
Is there a different way I should structure my data so that it picks up the client's order drop far earlier?

Focusing on July, mousing over each point shows the bucket's actual value as well as the upper and lower bounds. Is there an API call I can use to get these and perform my own analysis on them? I've looked through the ML documentation but can't see the upper/lower bounds being returned by any calls.


(Harvey Maddocks) #2

Hi,

What detector exactly are you using?

In general, an anomaly can be detected earlier if you use a shorter bucket span. If your data is pre-bucketed daily, maybe you can change that so you can use finer bucketing?

All the results are stored in a .ml-anomalies-* index, which contains the model bounds and other information.
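If it helps, here's a rough sketch of pulling the model_plot results from that index and flagging buckets whose actual value falls outside the model bounds. The host, job id, and lack of authentication are placeholders for illustration; you'd adapt these to your cluster (e.g. add the elastic user's credentials).

```python
# Sketch only: host, job_id, and open access are assumptions for illustration.
import json
import urllib.request

def out_of_bounds(model_plot_hits):
    """Return the model_plot records whose actual lies outside [lower, upper]."""
    flagged = []
    for hit in model_plot_hits:
        src = hit["_source"]
        if src.get("result_type") != "model_plot":
            continue
        actual = src.get("actual")
        if actual is None:  # empty bucket: no actual recorded
            continue
        if actual < src["model_lower"] or actual > src["model_upper"]:
            flagged.append(src)
    return flagged

def fetch_model_plot(job_id, host="http://localhost:9200"):
    """Search the .ml-anomalies-* index pattern for a job's model_plot docs."""
    query = {
        "size": 10000,
        "query": {"bool": {"filter": [
            {"term": {"job_id": job_id}},
            {"term": {"result_type": "model_plot"}},
        ]}},
        "sort": [{"timestamp": "asc"}],
    }
    req = urllib.request.Request(
        f"{host}/.ml-anomalies-*/_search",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["hits"]["hits"]
```

You could then run your own thresholding on the flagged records rather than relying on the chart tooltips.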


(rich collier) #3

I'm curious to know the choice of detector function as well.

Also, keep in mind that 7 months of daily data really isn't very much data to build a great model (That's only like 210 observations in total!). Ideally, if you can model your data hourly, you'll probably get better results. Is that possible? Do you have data that granular?


(Richard Morwood) #4

Thanks for the advice, I can now get data out of the ml-anomalies-(myjob) index.

The data I'm using is orders placed by a client. The graph is looking at the mean of sales.
While the raw data is available down to the second, I chose to look at daily buckets as there is no hourly pattern. The client might place 10 orders in one hour one afternoon, then spread them out over the whole day the next. Looking at daily totals gets around that a bit. Here's the chart looking at hourly buckets using the same metric.

No idea why the expected bands are so much higher; I'm guessing it's because there aren't many buckets with data. Still, the 0 anomalies don't get picked up until mid-July.


(rich collier) #5

Hi Richard,

Can you please show an example raw data record and the specific job configuration:

 curl -u elastic:changeme -XGET 'localhost:9200/_xpack/ml/anomaly_detectors/<job_id>?pretty'

That would be helpful!


(rich collier) #6

Hi Richard,

Are you able to share the information I asked for above? Are you still seeing the issue? Do you see the issue when the timeframe is zoomed way in, e.g. to a single day?


(Richard Morwood) #7

Hi Rich
Sorry for the delay, been out for a few days.

The issue is still there when zoomed right in. Zooming has no effect on the anomaly points showing on the charts.

Here's the job configuration

  {
    "count": 1,
    "jobs": [
      {
        "job_id": "aaabcclientmeanmetric1",
        "job_type": "anomaly_detector",
        "job_version": "5.5.1",
        "create_time": 1503446832501,
        "finished_time": 1503446834248,
        "analysis_config": {
          "bucket_span": "1d",
          "summary_count_field_name": "doc_count",
          "detectors": [
            {
              "detector_description": "mean(metric_1)",
              "function": "mean",
              "field_name": "metric_2",
              "detector_rules": [],
              "detector_index": 0
            }
          ],
          "influencers": []
        },
        "data_description": {
          "time_field": "date_ordered",
          "time_format": "epoch_ms"
        },
        "model_plot_config": {
          "enabled": true
        },
        "model_snapshot_retention_days": 1,
        "model_snapshot_id": "1503446833",
        "results_index_name": "shared"
      }
    ]
  }

Here's an example raw data row, obtained via GET orders-aaabcclientmeanmetric1/_search

  {
    "_index": "orders-aaabcclientmeanmetric1",
    "_type": "forlearning",
    "_id": "33557648",
    "_score": 1,
    "_source": {
      "dimension_1": "33557648",
      "dimension_2": "63276",
      "dimension_3": "9334",
      "date_ordered": 1487116800000,
      "metric_1": "21",
      "metric_2": "74",
      "dimension_4": "NSW",
      "dimension_5": "NSW Property Enquiries",
      "dimension_6": "LEAP",
      "dimension_7": "abcclient",
      "dimension_8": "11444"
    }
  },

This is the response from the job, looking at the data point for April 28. This point is only slightly outside the model's prediction zone; I was trying to find a non-zero example.

  {
    "_index": ".ml-anomalies-shared",
    "_type": "doc",
    "_id": "aaabcclientmeanmetric1_bucket_1493337600000_86400",
    "_score": 1,
    "_source": {
      "job_id": "aaabcclientmeanmetric1",
      "timestamp": 1493337600000,
      "anomaly_score": 0,
      "bucket_span": 86400,
      "initial_anomaly_score": 0,
      "event_count": 22,
      "is_interim": false,
      "bucket_influencers": [],
      "processing_time_ms": 0,
      "result_type": "bucket"
    }
  },
  {
    "_index": ".ml-anomalies-shared",
    "_type": "doc",
    "_id": "aaabcclientmeanmetric1_model_plot_1493337600000_86400_'arithmetic mean value by person'_29791_0",
    "_score": 1,
    "_source": {
      "job_id": "aaabcclientmeanmetric1",
      "result_type": "model_plot",
      "bucket_span": 86400,
      "timestamp": 1493337600000,
      "model_feature": "'arithmetic mean value by person'",
      "model_lower": 7.30419,
      "model_upper": 12.8753,
      "model_median": 9.9138,
      "actual": 13.9091
    }
  },

If there's anything else I can get to assist, please let me know.


(rich collier) #8

Hello Richard - at first blush, things look okay in your configuration. I'd love to see the full data set firsthand to understand this better. I will reach out to you on a direct message to see if sharing that data with me is possible.


(Tom Veasey) #9

Hi Richard. First of all, thanks very much for sending us the data. I discussed this a bit with Rich, but I thought I'd reply here because there are some useful points that this question raises.

The most important point relates to how our different functions treat missing values, i.e. buckets containing no documents. Basically, all our functions, with the exception of count and sum, entirely ignore empty buckets; count and sum treat these buckets as having a value of zero. This is the reason we don't detect the anomalies corresponding to the drops you've marked.
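To illustrate the difference (this is just a toy sketch, not our actual implementation): a mean-style function only ever sees buckets that contain documents, so when the client stops ordering, the empty buckets simply vanish from the series it models. A sum/count-style function sees those buckets as explicit zeros.

```python
# Toy illustration of empty-bucket semantics; not Elastic's code.
def bucket_series(docs, buckets):
    """Group metric values by bucket key; buckets with no docs map to []."""
    series = {b: [] for b in buckets}
    for bucket, value in docs:
        series[bucket].append(value)
    return series

def mean_view(series):
    # mean-style functions skip empty buckets entirely: the drop is invisible
    return {b: sum(v) / len(v) for b, v in series.items() if v}

def sum_view(series):
    # sum/count-style functions emit an explicit 0 for empty buckets
    return {b: sum(v) for b, v in series.items()}

docs = [("mon", 10.0), ("tue", 12.0)]          # client stops ordering mid-week
buckets = ["mon", "tue", "wed", "thu", "fri"]
series = bucket_series(docs, buckets)
print(mean_view(series))  # wed..fri are simply absent from the series
print(sum_view(series))   # wed..fri appear as 0, visible to the model
```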

The unusual-looking chart when you use hourly buckets relates to the fact that nearly all the buckets are in fact empty. The timestamps of the documents mean that they all land in one hour on each day. In this chart the bounds are correct, but the actuals are misleading. These come from searching our .ml-anomalies index, and the actuals we write there are zero for an empty bucket irrespective of the function (this is incorrect behaviour IMO and I've raised a ticket internally). When viewed zoomed out over a long time range, the chart then aggregates the hourly actual values to some longer span, based on the number of points it's prepared to display, using in this case the mean. The zeros then pull the actual values down. Note that if your data are polled at some interval, there is no point in having a bucket span shorter than that interval (and it is usually a bad idea for both sum and count).

Ideally, I'd just suggest using sum(metric_2) with a 1-day bucket; however, this threw up an issue with the trend modelling that we could potentially improve: unlike the mean, the sum shows much higher variation, and aside from the weekday/weekend pattern there isn't really a periodic pattern (the signal autocorrelation is only 0.2). Finding the weekday anomalies relies on partitioning the weekends and weekdays, so this needs a modelling change to address, which we'll look to make in an upcoming release.

There is also a way to get this to work by manipulating the input data slightly; I'm not sure if this is easy or feasible for you to do. I added an extra document with an explicit zero for metric_2 for every weekday that is missing documents, then ran low_non_null_sum. (This effectively ignores weekends.) The results are below:
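For illustration, here's a sketch of that zero-fill step. Field names follow the mapping shown earlier in this thread; how you actually ingest the synthetic documents (bulk API, ingest pipeline, etc.) is up to you.

```python
# Sketch: emit a synthetic zero document for each weekday with no real docs.
# Field names match this thread's mapping; ingestion mechanics are up to you.
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def zero_fill_weekdays(existing_dates, start, end):
    """Return synthetic zero docs for weekdays in [start, end] with no data.

    existing_dates: set of date objects that already have at least one doc.
    """
    docs = []
    day = start
    while day <= end:
        if day.weekday() < 5 and day not in existing_dates:  # Mon=0 .. Fri=4
            docs.append({
                # epoch millis at midnight UTC, matching the job's epoch_ms format
                "date_ordered": (day - EPOCH).days * 86_400_000,
                "metric_2": "0",  # string value, matching the existing documents
            })
        day += timedelta(days=1)
    return docs
```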


(system) #10

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.