Calculate the processing time of an algorithm

Hello everyone,

I'm new to Elasticsearch and currently on version 6.2.4, but we will soon migrate to 7.3.2.

For some context: I'm monitoring a system's production and performance using ES and Kibana. The system is made of multiple algorithms, each with its own name, running at the same time. Chunks of data are sent to each algorithm for processing. A log is generated for each chunk once it has been processed.

What I would like to do is a bar chart showing the processing time of each algorithm, as shown in my drawing below. Titi, tata, tutu... are the names of the algorithms.
[bar chart drawing: processing time per algorithm]

For this I need to calculate the processing time of each algorithm. This means:

  1. Find the first occurrence of a log message about a chunk being processed,
  2. Find the last occurrence of a log message about a chunk being processed,
  3. Calculate and store the difference between their timestamps,
  4. Associate the algorithm name and the processing time.

I tried using the console for scripting but never managed to make it work, and I don't even know if this is the right approach. If you have an idea of how I can deal with any of these steps, please help.

And if you need any more information, like a log extract, don't hesitate to ask.

This is pretty readily achievable.

Suppose I set up my index like this:

PUT algorithms
{
  "settings": {
    "number_of_replicas": 0,
    "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "keyword"
      },
      "time": {
        "type": "date"
      }
    }
  }
}

and throw events in for the algorithms:

POST _bulk
{ "index" : { "_index" : "algorithms"} }
{ "name" : "toto", "time": "2019-11-13T22:15:30Z" }
{ "index" : { "_index" : "algorithms"} }
{ "name" : "titi", "time": "2019-11-13T22:15:32Z" }
{ "index" : { "_index" : "algorithms"} }
{ "name" : "tutu", "time": "2019-11-13T22:15:34Z" }
{ "index" : { "_index" : "algorithms"} }
{ "name" : "tata", "time": "2019-11-13T22:15:36Z" }
{ "index" : { "_index" : "algorithms"} }
{ "name" : "toto", "time": "2019-11-13T22:15:37Z" }
{ "index" : { "_index" : "algorithms"} }
{ "name" : "titi", "time": "2019-11-13T22:15:38Z" }
{ "index" : { "_index" : "algorithms"} }
{ "name" : "tutu", "time": "2019-11-13T22:15:39Z" }
{ "index" : { "_index" : "algorithms"} }
{ "name" : "tata", "time": "2019-11-13T22:15:43Z" }

Yes, I'm only putting in two events per algo, but it wouldn't matter how many additional events were in there; this would still work.

I'm going to use aggregations here. First, I'm going to use a terms aggregation on the algorithm name to silo each algorithm's events into a bucket.

Then, within each of those buckets, I'm going to do a stats aggregation, which includes min and max (and string representations of them, too).

POST algorithms/_search
{
  "size": 0, 
  "aggs": {
    "algo": {
      "terms": {
        "field": "name",
        "size": 10
      },
      "aggs": {
        "stats": {
          "stats": {
            "field": "time"
          }
        }
      }
    }
  }
}

(You can see there is no query in the body; it is implicitly match_all, and I've provided "size": 0 because I'm not interested in having "hits" - the documents that result from the search. I only want "aggs".)

The results:

{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 8,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "algo" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "tata",
          "doc_count" : 2,
          "stats" : {
            "count" : 2,
            "min" : 1.573683336E12,
            "max" : 1.573683343E12,
            "avg" : 1.5736833395E12,
            "sum" : 3.147366679E12,
            "min_as_string" : "2019-11-13T22:15:36.000Z",
            "max_as_string" : "2019-11-13T22:15:43.000Z",
            "avg_as_string" : "2019-11-13T22:15:39.500Z",
            "sum_as_string" : "2069-09-25T20:31:19.000Z"
          }
        },
        {
          "key" : "titi",
          "doc_count" : 2,
          "stats" : {
            "count" : 2,
            "min" : 1.573683332E12,
            "max" : 1.573683338E12,
            "avg" : 1.573683335E12,
            "sum" : 3.14736667E12,
            "min_as_string" : "2019-11-13T22:15:32.000Z",
            "max_as_string" : "2019-11-13T22:15:38.000Z",
            "avg_as_string" : "2019-11-13T22:15:35.000Z",
            "sum_as_string" : "2069-09-25T20:31:10.000Z"
          }
        },
        {
          "key" : "toto",
          "doc_count" : 2,
          "stats" : {
            "count" : 2,
            "min" : 1.57368333E12,
            "max" : 1.573683337E12,
            "avg" : 1.5736833335E12,
            "sum" : 3.147366667E12,
            "min_as_string" : "2019-11-13T22:15:30.000Z",
            "max_as_string" : "2019-11-13T22:15:37.000Z",
            "avg_as_string" : "2019-11-13T22:15:33.500Z",
            "sum_as_string" : "2069-09-25T20:31:07.000Z"
          }
        },
        {
          "key" : "tutu",
          "doc_count" : 2,
          "stats" : {
            "count" : 2,
            "min" : 1.573683334E12,
            "max" : 1.573683339E12,
            "avg" : 1.5736833365E12,
            "sum" : 3.147366673E12,
            "min_as_string" : "2019-11-13T22:15:34.000Z",
            "max_as_string" : "2019-11-13T22:15:39.000Z",
            "avg_as_string" : "2019-11-13T22:15:36.500Z",
            "sum_as_string" : "2069-09-25T20:31:13.000Z"
          }
        }
      ]
    }
  }
}

I wonder how you will isolate the results from multiple runs. Maybe you have a unique process ID for each run, or maybe you have a time window you can rely on to contain all of the events from one run and no events from any other run; either of those is a criterion you could add to a query to limit the documents used in the aggregations.
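For instance, if each event also carried something like a run identifier (runId below is a made-up field name and value, just to illustrate), you could restrict the same aggregation to a single run with a query:

POST algorithms/_search
{
  "size": 0,
  "query": {
    "term": {
      "runId": "run-42"
    }
  },
  "aggs": {
    "algo": {
      "terms": {
        "field": "name",
        "size": 10
      },
      "aggs": {
        "stats": {
          "stats": {
            "field": "time"
          }
        }
      }
    }
  }
}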


Thank you very much for this detailed answer. It gave me the baseline to obtain what I wished for. I'm going to explain how I completed it; maybe it will be useful to someone someday.

First I adapted your aggregation query to obtain the minimum and maximum timestamps of each of my algorithms, as shown below.

GET event-2019.10.17.00/_search
{
  "query": {
    "bool": {
      "must": {
        "exists" : { "field" : "apeName" }
      },
      "filter": {
        "term": {
          "message.keyword": "CHUNK processed OK"
        }
      }
    }
  },
  "size": 0,
  "aggs": {
    "APE": {
      "terms": {
        "field": "apeName.keyword",
        "size": 11
      },
      "aggs": {
        "time": {
          "stats": {
            "field": "timestamp"
          }
        }
      }
    }
  }
}

I then created a new index on the model that you provided.

PUT algorithmprocessingtime
{
  "settings": {
    "number_of_replicas": 0,
    "number_of_shards": 1
  },
  "mappings": {
    "doc": {
      "properties": {
        "name": { "type": "keyword" },
        "startTime" : { "type" : "date" },
        "endTime" : { "type" : "date" },
      } 
    }
  }
}

Using the _bulk API I added the min and max values of each algorithm to their specific fields. The construction of the request was done manually for each algorithm (11 of them); I will have to find a way to automate this whole process (one idea for that is sketched after the bulk request below).

POST _bulk
{ "index" : { "_index" : "algorithmprocessingtime", "_type" : "doc" } }
{ "name" : "IDAC", "startTime" : "2019-10-17T00:40:31.176Z", "endTime" : "2019-10-17T00:48:12.361Z" }
{ "index" : { "_index" : "algorithmprocessingtime", "_type" : "doc" } }
{ "name" : "TSIT", "startTime" : "2019-10-17T00:40:31.170Z", "endTime" : "2019-10-17T00:48:12.707Z" }

Lastly, I made a scripted field that computes the difference between the end time and the start time:

GET algorithmprocessingtime/_search
{
  "script_fields": {
    "processingTime": {
      "script": {
        "lang": "painless",
        "source": "doc['endTime'].date.millisOfDay - doc['startTime'].date.millisOfDay"
      }
    }
  }
}
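One caveat: millisOfDay only works here because the start and end of a run fall on the same calendar day. If a run ever crosses midnight, subtracting the raw epoch millisecond values should be safer; I believe the date exposed in Painless on 6.x also has a millis property, though this is a sketch I haven't re-tested on 7.x:

GET algorithmprocessingtime/_search
{
  "script_fields": {
    "processingTime": {
      "script": {
        "lang": "painless",
        "source": "doc['endTime'].date.millis - doc['startTime'].date.millis"
      }
    }
  }
}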

And finally, here it is:

As for isolating the results from multiple runs, I do have a field with a unique ID allowing me to bucket each run independently.

