Create Watcher for percentile difference of a field between two days

Hi,

I would like to get an idea of how I can create a watcher that triggers based on the percentile difference of a numeric field across two days.
Here's the scenario:

  1. Two float-type fields, let's say, A and B
  2. Time window to watch - the previous day and today
  3. The watcher should execute only once a day (possibly at the end of the day)
  4. Calculate the diff between the values of A and B from today and the previous day
  5. Calculate the percentage of the difference between the values (increase/decrease)
  6. Generate an alert if the % diff > 10.
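In other words, for step 5, what I want to compute is something like this (a sketch with made-up numbers):

pct = (today - yesterday) / yesterday * 100

For example, if A was 100 yesterday and 115 today, the diff is 15%, which is greater than 10, so an alert should fire.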

The basic idea I've reached so far is to have 4 aggregations:

  1. One for A, with a script calculating the previous day's value
  2. One for A, with a script calculating today's value
  3. One for B, with a script calculating the previous day's value
  4. One for B, with a script calculating today's value

For conditions, I am not sure if this will require two conditions, one for A and another for B.

Also, for transform, I am not sure how I'd get these % values.

P.S.: I've gone through the Elastic watcher examples; that is where I figured out some of the above-mentioned details. But I am still not able to clearly visualize the basic structure for this use case.

Hey,

I think you are almost there regarding the structure. First thing is to get your query right, before doing anything else. If you are able to retrieve the data you need, everything else (like the condition) is going to be easy.

You could have a query that

  • filters on the last two days
  • uses a filters aggregation to have one bucket for a percentile agg from today and one bucket for a percentile from yesterday

and then compare the two values in the condition and check if their difference is more than 10%.
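As a rough sketch (assuming a @timestamp date field and a numeric field A - adapt the names to your data), the search body could look like this:

GET filebeat-*/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": {
        "range": {
          "@timestamp": {
            "gte": "now-1d/d",
            "lt": "now+1d/d"
          }
        }
      }
    }
  },
  "aggs": {
    "days": {
      "filters": {
        "filters": {
          "yesterday": { "range": { "@timestamp": { "gte": "now-1d/d", "lt": "now/d" } } },
          "today": { "range": { "@timestamp": { "gte": "now/d" } } }
        }
      },
      "aggs": {
        "pct_A": { "percentiles": { "field": "A", "percents": [ 95 ] } }
      }
    }
  }
}

This gives you one named bucket per day, each with its own percentile value, which the condition can then compare.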

Does that make sense to you?

Hi,

Yes, that makes sense. I'll start with getting the queries to return the intended data and then proceed further with the conditions.

I'll update here in case something goes sideways.

Thanks
Shantanu

Hi, @spinscale, while I was sketching out the watcher, I came across the error:

failed to execute [search] input for watch [_inlined_], reason [[range] unknown field [date_range], parser not found]

Here's my watcher:

{
  "metadata":{
    "threshold":5,
    "interval":"2m",
    "window":"2d"
  },
  "trigger":{
    "schedule":{
      "interval":"2m"
    }
  },
  "input":{
    "search":{
      "request":{
        "indices":[
          "filebeat-*"
        ],
        "types":[
          "doc"
        ],
        "body": {
          "aggs": {
            "aggs1": {
              "range": {
                "date_range": {
                  "ranges": [
                    {
                      "from": "now-2d/d"
                    },
                    {
                      "to": "now-2d/d"
                    }
                  ]
                },
                "aggs": {
                  "max": {
                    "script": {
                      "source": "(doc['upstream'].value\/100)"
                    }
                  }
                }
              }
            },
            "aggs2": {
              "date_range": {
                "ranges": [
                  {
                    "from": "now-2d/d"
                  },
                  {
                    "to": "now-2d/d"
                  }
                ]
              },
              "aggs": {
                "max": {
                  "script": {
                    "source": "(doc['downstream'].value\/100)"
                  }
                }
              }
            },
            "aggs3": {
              "date_range": {
                "ranges": [
                  {
                    "from": "now-1d/d"
                  },
                  {
                    "to": "now/d"
                  }
                ]
              },
              "aggs": {
                "max": {
                  "script": {
                    "source": "(doc['upstream'].value\/100)"
                  }
                }
              }
            },
            "aggs4": {
              "date_range": {
                "ranges": [
                  {
                    "from": "now-1d/d"
                  },
                  {
                    "to": "now/d"
                  }
                ]
              },
              "aggs": {
                "max": {
                  "script": {
                    "source": "(doc['downstream'].value\/100)"
                  }
                }
              }
            }
          },
          "query": {
            "bool": {
              "filter": {
                "range": {
                  "@timestamp": {
                    "lte": "now",
                    "gte": "now-{{ctx.metadata.window}}"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

This does not include transform, condition, and action. I am just checking to get the correct data first.

date_range is an arbitrarily named identifier; the name of the aggregation is range, which you may be missing in one of your aggs.

I think there's something else that is not correct inside "body": {. Even after changing date_range to range, I am getting:

failed to execute [search] input for watch [_inlined_], reason [[range] unknown field [aggs], parser not found]. I am using the example Elastic watchers as a reference.

I do not think that this is a watcher issue, but a search query issue. Have you tried running that search as a regular search request and made sure it does not fail?

Also, you have not pasted your changed search request, so it is hard to help further; in any case, you need to add the range part to all aggregations.

Yes, I was missing range in some aggs. But now I've added it. Here's the watcher:

{
  "metadata":{
    "threshold":5,
    "interval":"2m",
    "window":"2d"
  },
  "trigger":{
    "schedule":{
      "interval":"2m"
    }
  },
  "input":{
    "search":{
      "request":{
        "indices":[
          "filebeat-*"
        ],
        "types":[
          "doc"
        ],
        "body": {
          "aggs": {
            "aggs1": {
              "range": {
                "date_range": {
                  "ranges": [
                    {
                      "from": "now-2d/d"
                    },
                    {
                      "to": "now-2d/d"
                    }
                  ]
                },
                "aggs": {
                  "max": {
                    "script": {
                      "source": "(doc['upstream'].value\/100)"
                    }
                  }
                }
              }
            },
            "aggs2": {
              "range": {    
                "date_range": {
                  "ranges": [
                    {
                      "from": "now-2d/d"
                    },
                    {
                      "to": "now-2d/d"
                    }
                  ]
                }  
              },
              "aggs": {
                "max": {
                  "script": {
                    "source": "(doc['downstream'].value\/100)"
                  }
                }
              }
            },
            "aggs3": {
              "range": {    
                "date_range": {
                  "ranges": [
                    {
                      "from": "now-1d/d"
                    },
                    {
                      "to": "now/d"
                    }
                  ]
                }  
              },
              "aggs": {
                "max": {
                  "script": {
                    "source": "(doc['upstream'].value\/100)"
                  }
                }
              }
            },
            "aggs4": {
              "range": {    
                "date_range": {
                  "ranges": [
                    {
                      "from": "now-1d/d"
                    },
                    {
                      "to": "now/d"
                    }
                  ]
                }   
              },
              "aggs": {
                "max": {
                  "script": {
                    "source": "(doc['downstream'].value\/100)"
                  }
                }
              }
            }
          },
          "query": {
            "bool": {
              "filter": {
                "range": {
                  "@timestamp": {
                    "lte": "now",
                    "gte": "now-{{ctx.metadata.window}}"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

Confirmed in the Search Profiler, it is giving the same error:

[range] unknown field [aggs], parser not found. The problem I am facing is that the error gives no clue about what might be wrong.

This is indeed a suboptimal error message... You now have two identifiers, aggsX and date_range, in some of your aggs, and they are getting confused. Is this the search request you are looking for?

GET _search
{
  "aggs": {
    "aggs1": {
      "range": {
        "ranges": [
          {
            "from": "now-2d/d"
          },
          {
            "to": "now-2d/d"
          }
        ]
      },
      "aggs": {
        "my_agg": {
          "max": {
            "script": {
              "source": "(doc['upstream'].value/100)"
            }
          }
        }
      }
    },
    "aggs2": {
      "range": {
        "ranges": [
          {
            "from": "now-2d/d"
          },
          {
            "to": "now-2d/d"
          }
        ]
      },
      "aggs": {
        "my_agg": {
          "max": {
            "script": {
              "source": "(doc['downstream'].value/100)"
            }
          }
        }
      }
    },
    "aggs3": {
      "range": {
        "ranges": [
          {
            "from": "now-1d/d"
          },
          {
            "to": "now/d"
          }
        ]
      },
      "aggs": {
        "my_agg": {
          "max": {
            "script": {
              "source": "(doc['upstream'].value/100)"
            }
          }
        }
      }
    },
    "aggs4": {
      "range": {
        "ranges": [
          {
            "from": "now-1d/d"
          },
          {
            "to": "now/d"
          }
        ]
      }
    },
    "aggs": {
      "max": {
        "script": {
          "source": "(doc['downstream'].value/100)"
        }
      }
    }
  },
  "query": {
    "bool": {
      "filter": {
        "range": {
          "@timestamp": {
            "lte": "now",
            "gte": "now-{{ctx.metadata.window}}"
          }
        }
      }
    }
  }
}

Well, probably yes. I checked this in the dev console, but it is returning a date format error:

{
  "error": {
    "root_cause": [
      {
        "type": "number_format_exception",
        "reason": "For input string: \"now-2d/d\""
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 0,
        "index": "filebeat-6.2.4-2018.05.15",
        "node": "GXT24JxFR2u09ohBu50mBg",
        "reason": {
          "type": "number_format_exception",
          "reason": "For input string: \"now-2d/d\""
        }
      }
    ]
  },
  "status": 400
}

Is anything missing with the date format?

Update:
I've got past this error by replacing range with date_range. But now there is an IllegalStateException:

[2018-05-16T04:17:21,532][DEBUG][o.e.a.s.TransportSearchAction] [GXT24Jx] [filebeat-6.2.4-2018.04.29][4], node[GXT24JxFR2u09ohBu50mBg], [P], s[STARTED], a[id=QorPzoKqSJyo4EPom0wgKg]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[filebeat-6.2.4-2018.05.09-index, filebeat-6.2.4-2018.04.29, filebeat-6.2.4-2018.05.08, filebeat-6.2.4-2018.05.12-index, filebeat-6.2.4-2018.05.11-index, filebeat-6.2.4-2018.05.10-index, filebeat-6.2.4-2018.04.30, filebeat-6.2.4-2018.05.05, filebeat-6.2.4-2018.05.16, filebeat-2017.09.24, filebeat-6.2.4-2018.04.25, filebeat-6.2.4-2018.05.04, filebeat-6.2.4-2018.05.15, filebeat-6.2.4-2018.04.28, filebeat-6.2.4-2018.05.07, filebeat-6.2.4-2018.05.06, filebeat-6.2.4-2018.05.01, filebeat-6.2.4-2018.05.12, filebeat-6.2.4-2018.04.24, filebeat-6.2.4-2018.05.03, filebeat-6.2.4-2018.05.14, filebeat-6.2.4-2018.04.23, filebeat-6.2.4-2018.05.02, filebeat-6.2.4-2018.05.13], indicesOptions=IndicesOptions[id=7, ignore_unavailable=true, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=false, ignore_aliases=false], types=[doc], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=5, batchedReduceSize=512, preFilterShardSize=128, 
source={"query":{"bool":{"filter":[{"range":{"@timestamp":{"from":"now-2d","to":"now","include_lower":true,"include_upper":true,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},"aggregations":{"aggs1":{"date_range":{"ranges":[{"from":"now-2d/d"},{"to":"now-1d/d"}],"keyed":false},"aggregations":{"my_agg":{"max":{"script":{"source":"(doc['upstream'].value/100)","lang":"painless"}}}}},"aggs2":{"date_range":{"ranges":[{"from":"now-2d/d"},{"to":"now-1d/d"}],"keyed":false},"aggregations":{"my_agg":{"max":{"script":{"source":"(doc['downstream'].value/100)","lang":"painless"}}}}},"aggs3":{"date_range":{"ranges":[{"from":"now-1d/d"},{"to":"now/d"}],"keyed":false},"aggregations":{"my_agg":{"max":{"script":{"source":"(doc['upstream'].value/100)","lang":"painless"}}}}},"aggs4":{"date_range":{"ranges":[{"from":"now-1d/d"},{"to":"now/d"}],"keyed":false}},"aggs":{"max":{"script":{"source":"(doc['downstream'].value/100)","lang":"painless"}}}}}}] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [GXT24Jx][10.30.197.90:9300][indices:data/read/search[phase/query]]
Caused by: java.lang.IllegalStateException: value source config is invalid; must have either a field context or a script or marked as unwrapped
        at org.elasticsearch.search.aggregations.support.ValuesSourceConfig.toValuesSource(ValuesSourceConfig.java:228) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory.createInternal(ValuesSourceAggregatorFactory.java:51) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.aggregations.AggregatorFactory.create(AggregatorFactory.java:216) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.aggregations.AggregatorFactories.createTopLevelAggregators(AggregatorFactories.java:216) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.aggregations.AggregationPhase.preProcess(AggregationPhase.java:55) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:307) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:340) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:316) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:312) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.search.SearchService$3.doRun(SearchService.java:1002) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]

I used the wrong agg in the above example; date_range needs a field to operate on:

GET foo/_search
{
  "aggs": {
    "aggs1": {
      "date_range": {
        "field" : "timestamp",
        "ranges": [
          {
            "from": "now-2d/d"
          },
          {
            "to": "now-2d/d"
          }
        ]
      },
      "aggs": {
        "my_agg": {
          "max": {
            "script": {
              "source": "(doc['upstream'].value/100)"
            }
          }
        }
      }
    },
    "aggs2": {
      "date_range": {
        "field" : "timestamp",
        "ranges": [
          {
            "from": "now-2d/d"
          },
          {
            "to": "now-2d/d"
          }
        ]
      },
      "aggs": {
        "my_agg": {
          "max": {
            "script": {
              "source": "(doc['downstream'].value/100)"
            }
          }
        }
      }
    },
    "aggs3": {
      "date_range": {
        "field" : "timestamp",
        "ranges": [
          {
            "from": "now-1d/d"
          },
          {
            "to": "now/d"
          }
        ]
      },
      "aggs": {
        "my_agg": {
          "max": {
            "script": {
              "source": "(doc['upstream'].value/100)"
            }
          }
        }
      }
    },
    "aggs4": {
      "date_range": {
        "field" : "timestamp",
        "ranges": [
          {
            "from": "now-1d/d"
          },
          {
            "to": "now/d"
          }
        ]
      }
    },
    "aggs": {
      "max": {
        "script": {
          "source": "(doc['downstream'].value/100)"
        }
      }
    }
  },
  "query": {
    "bool": {
      "filter": {
        "range": {
          "@timestamp": {
            "lte": "now+1y",
            "gte": "now-1y"
          }
        }
      }
    }
  }
}

Just a side note: it is pretty inefficient to use scripting like that, as every value needs to be processed and divided by 100 at search time. If you did that at index time using an ingest pipeline, this aggregation could be magnitudes faster.
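For example, with an ingest pipeline along these lines (a sketch - the pipeline and field names are made up), the division happens once at index time:

PUT _ingest/pipeline/divide-by-100
{
  "processors": [
    {
      "script": {
        "source": "ctx.upstream_pct = ctx.upstream / 100.0; ctx.downstream_pct = ctx.downstream / 100.0"
      }
    }
  ]
}

Then the max aggregations can use "field": "upstream_pct" instead of a script, avoiding a Painless execution per document at search time.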

Yes, adding another field like downstream.pct on the logstash side and then indexing it into ES makes sense here. Thanks for the suggestion.

@spinscale The search query is returning correct data. But I am getting a single value in response.

"aggs": {
  "value": 334.510625
}

I wanted to get four corresponding values for the four aggs.

Update:

This was resolved: the closing brace of aggs4 came before its nested agg, so the nested agg was not actually inside it.
But why is doc_count shown as zero in all the aggs, even though the data is coming back correctly with valid values?
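Meanwhile, the condition I have in mind for the 10% check is roughly this (untested; the bucket paths assume the aggregation names from the query above):

"condition": {
  "script": {
    "source": "def y = ctx.payload.aggregations.aggs1.buckets[0].my_agg.value; def t = ctx.payload.aggregations.aggs3.buckets[0].my_agg.value; return y != null && t != null && Math.abs(t - y) / y * 100 > 10"
  }
}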

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.