Apply a simple math operation to count result

I have created a new visualization with the number of requests for an Apache log.
Y-Axis: Count
X-Axis: Date Histogram using timestamp field and interval set to Hourly.

This shows the total number of events for every hour, but we want the number of requests per second, so I need to divide the "count" by 3600.
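The conversion I am after is just this, sketched in Python (the counts are the hourly doc_count values from the example response below):

```python
# Convert hourly event totals into average requests per second.
SECONDS_PER_HOUR = 3600

hourly_counts = [2198, 2383]  # doc_count of each 1h bucket
rates = [count / SECONDS_PER_HOUR for count in hourly_counts]

print(rates)  # roughly [0.61, 0.66] requests per second
```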

I have tried using the "JSON Input", but it only applies to the "key" field.

Eg. without "JSON Input":

{
  "key_as_string": "2015-06-10T08:00:00.000Z",
  "key": 1433923200000,
  "doc_count": 2198
},
{
  "key_as_string": "2015-06-10T09:00:00.000Z",
  "key": 1433926800000,
  "doc_count": 2383
},

If I set something in "JSON Input" for the Y-Axis it is ignored (doesn't appear in the json request).
If I set {"script":"_value/3600"} in the "JSON Input" for the "X-Axis", it is added to the aggregation field of the request:

  "aggs": {
    "2": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1h",
        ...
        "script": "_value/3600"

And the response:

{
  "key_as_string": "1970-01-05T14:00:00.000Z",
  "key": 396000000,
  "doc_count": 48531
}

396000000 is the timestamp divided by 3600, i.e. the script was applied to the bucket key instead of to doc_count.
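A quick arithmetic check confirms it: dividing the epoch-millisecond key from the first example by 3600 and re-bucketing by hour reproduces exactly the strange 1970 key in the response.

```python
# The script ran against the bucket key, not doc_count: dividing an
# epoch-millisecond timestamp by 3600 and truncating to the 1h bucket
# reproduces the 1970 key seen in the response.
MS_PER_HOUR = 3600 * 1000

original_key = 1433923200000            # 2015-06-10T08:00:00.000Z in epoch ms
scaled = original_key / 3600            # what the script computed
bucket_key = int(scaled) // MS_PER_HOUR * MS_PER_HOUR

print(bucket_key)  # 396000000, i.e. 1970-01-05T14:00:00.000Z
```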

Thanks

Just change your interval to 1s and that will give you a per-second aggregation instead of 1h:

"aggs": {
  "2": {
    "date_histogram": {
      "field": "@timestamp",
      "interval": "1s",

Aggregating by second is different from aggregating by hour and calculating how many requests per second you get.
Also, if you try to aggregate by second over a 24h period, Kibana (or ES) automatically changes the interval to 10 minutes (if I remember correctly).

I get you, so you want to know you have an "average" of 5 hits/s for the hour.

I don't see any easy way of creating an average count. All the tests I have run suggest that you cannot access the doc_count of the aggregation from a script. It is probably an order-of-operations issue: the script is run before the aggregation.
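For what it's worth, newer Elasticsearch releases (2.0+) added pipeline aggregations, and a bucket_script that reads the special "_count" path would be one way to divide each bucket's doc_count by 3600 on the server side. Here is a sketch of such a request body as a Python dict; the aggregation names ("per_hour", "rate_per_second") are mine, and this is untested against the setup in this thread:

```python
# Sketch of a request body using a bucket_script pipeline aggregation
# (Elasticsearch 2.0+) to turn each hourly doc_count into a per-second rate.
body = {
    "aggs": {
        "per_hour": {
            "date_histogram": {"field": "@timestamp", "interval": "1h"},
            "aggs": {
                "rate_per_second": {
                    "bucket_script": {
                        # "_count" is the special buckets_path for doc_count
                        "buckets_path": {"count": "_count"},
                        "script": "count / 3600",
                    }
                }
            },
        }
    }
}

print(body["aggs"]["per_hour"]["aggs"]["rate_per_second"]["bucket_script"])
```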

Maybe we are thinking about this the wrong way.

How about using the Logstash metrics filter to create the field you need?

https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html

input {
  generator {
    type => "generated"
  }
}
filter {
  if [type] == "generated" {
    metrics {
      meter => "events"
      add_tag => "metric"
    }
  }
}
output {
  # only emit events with the 'metric' tag
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "rate: %{events.rate_1m}"
      }
    }
  }
}

Thanks for the answer, eperry. I don't think it solves the problem, though, if users want to dynamically define their own metrics after the data has been indexed.

For example, some users want to know the requests per second, but then they might want to filter to the error requests per second, or to the requests per second under any other criteria, in Kibana. Defining the metrics before indexing your data doesn't cover that.

It is surprising, because I think these are very common use cases. It should be simple to do.

Data manipulation is always subject to limitations. That is why it is such a hot field, and people are programming applications galore to process data in the way people want to see numbers represented.

It is difficult to tell users that it can't be done the way they want, but providing a reasonable alternative is good too. Your new scenario is easily accomplished as long as you don't need to calculate results after an aggregation: HPS or EPS can easily be graphed. But taking one hour of an aggregation and then applying a calculation to it is difficult; Kibana just does not do it yet. I am sure Elastic would be grateful for any code enhancements you provide to accomplish your goal.

:smile:

If you ever come up with a solution, let me know; I would be interested in its implementation.

Thanks eperry. You are right. What do you think about adding a "scale" option? For example, in the metric visualization, something like this in the "options":

<div class="form-group">
<label>Scale - {{ vis.params.scale }}pt</label>
<input type="range" ng-model="vis.params.scale" class="form-control" min="1" max="3600" />
</div>

And render {{metric.value / vis.params.scale}} instead of just {{metric.value}}.

I'm new to Kibana, so I'm not familiar with the code yet, and I don't know if this is something that could be generalized the same way as the other options (font size, smooth lines, show legend, etc.).

A better option would be to automatically scale any visualization by the number of seconds/minutes/etc. it represents. For example, if I have an area chart with 2-hour columns with values [2000, 1000, 1000] and I select this new option "scale by [seconds]", those values would be automatically converted to [2000/(2*3600), 1000/(2*3600), 1000/(2*3600)]. Do you think this is a good idea?
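Concretely, the scaling I have in mind would be something like this (the function name is mine, just for illustration):

```python
# Scale per-bucket totals by the number of seconds each bucket spans,
# turning them into per-second averages.
def scale_by_seconds(values, bucket_seconds):
    """Convert per-bucket totals into per-second rates."""
    return [v / bucket_seconds for v in values]

# 2-hour buckets: divide each total by 2 * 3600 seconds
print(scale_by_seconds([2000, 1000, 1000], 2 * 3600))
```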

Yeah, I am not sure; I am usually a backend programmer. It may be one way of going about it. Probably the best solution would be to find a spot in Kibana, between receiving the data and displaying it, where you could add "JavaScript" manipulators.

Then do something like this on the search URL:

HOST:bob metric:alpha.hits.sec | preset_javascript

Or even add a text field to hold the advanced option.

But this is slightly outside my field of expertise.