Increment date

How can I move a window along the time axis? For example, take the segment 00:00 - 00:15 and find the minimum in it. The next step is to take the segment 00:01 - 00:16, find the minimum in it, and so on; that is, add 1 minute to both gte and lte. Roughly in which direction should I look?

I suppose one way to do it is with pipeline aggregations. A Date histogram aggregation with a one-minute interval combined with a Moving function aggregation should help you.

Thanks for your reply. Now how do you find the maximum value for each hour from this?

GET logstash-2021.12.2*/_search 
{ 
  "query": { 
    "bool": { 
      "filter": [ 
        { 
          "range": { 
            "@timestamp": { 
              "gte": "now-24h" 
            } 
          } 
        }, 
        { 
          "bool": { 
            "should": [ 
              { 
                "match_phrase": { 
                  "company": "BLAH-BLAH" 
                } 
              }
            ] 
          } 
        } 
      ] 
    } 
  },
  "size": 0,
  "aggs": {
    "myDatehistogram": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1m",
        "offset": "+30s"
      }, "aggs": {
        "the_count": {
          "moving_fn": {
            "buckets_path": "_count",
            "window": 15,
            "script": "MovingFunctions.min(values)"
          }
        }
      }
    }
  }
}

response:

"aggregations" : {
    "myDatehistogram" : {
      "buckets" : [
        {
          "key_as_string" : "2021-12-25T05:58:30.000Z",
          "key" : 1640411910000,
          "doc_count" : 1196,
          "the_count" : {
            "value" : null
          }
        },
        {
          "key_as_string" : "2021-12-25T05:59:30.000Z",
          "key" : 1640411970000,
          "doc_count" : 1942,
          "the_count" : {
            "value" : 1196.0
          }
        },
        {
          "key_as_string" : "2021-12-25T06:00:30.000Z",
          "key" : 1640412030000,
          "doc_count" : 1802,
          "the_count" : {
            "value" : 1196.0
          }
        },
        {
          "key_as_string" : "2021-12-25T06:01:30.000Z",
          "key" : 1640412090000,
          "doc_count" : 1735,
          "the_count" : {
            "value" : 1196.0
          }
        },
        {
          "key_as_string" : "2021-12-25T06:02:30.000Z",
          "key" : 1640412150000,
          "doc_count" : 1699,
          "the_count" : {
            "value" : 1196.0
          }
        },
        {
          "key_as_string" : "2021-12-25T06:03:30.000Z",
          "key" : 1640412210000,
          "doc_count" : 1506,
          "the_count" : {
            "value" : 1196.0
          }
        }

How can I find the max value for every 1 hour from this response?

Oh, no. With that query you've got the minimum of the document count over the 15-minute window for each one-minute bucket. You would have to take the minimum of those minimums.

You can use a max/min aggregation as a sub-aggregation, just like the sum aggregation in the following example, which is the same as the first example in the link.

curl -X POST "localhost:9200/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "my_date_histo": {                  
      "date_histogram": {
        "field": "date",
        "calendar_interval": "1M"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "price" }   
        },
        "the_movfn": {
          "moving_fn": {
            "buckets_path": "the_sum",  
            "window": 10,
            "script": "MovingFunctions.unweightedAvg(values)"
          }
        }
      }
    }
  }
}
'

It is also recommended to adjust the "shift" and "gap_policy" options according to your requirements, as in the sketch below.
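For illustration, a minimal sketch of how those two options could be set on the moving_fn from the example above; the values shown (a shift of 1 and a gap_policy of "skip") are placeholders of mine, not recommendations:

        "the_movfn": {
          "moving_fn": {
            "buckets_path": "the_sum",
            "window": 10,
            "shift": 1,
            "gap_policy": "skip",
            "script": "MovingFunctions.unweightedAvg(values)"
          }
        }

Here "shift": 1 includes the current bucket in the window, and "gap_policy": "skip" makes the function ignore buckets with missing values.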

You probably did not understand me. I already got this result. Now I need to find the maximum of the values received, at an interval of every hour.

"aggregations" : {
    "myDatehistogram" : {
      "buckets" : [
        {
          "key_as_string" : "2021-12-24T23:59:30.000Z",
          "key" : 1640390370000,
          "doc_count" : 845,
          "the_count" : {
            "value" : null
          }
        },
        {
          "key_as_string" : "2021-12-25T00:00:30.000Z",
          "key" : 1640390430000,
          "doc_count" : 2277,
          "the_count" : {
            "value" : 845.0
          }
        },
        {
          "key_as_string" : "2021-12-25T00:01:30.000Z",
          "key" : 1640390490000,
          "doc_count" : 1839,
          "the_count" : {
            "value" : 845.0
          }
        },
        {
          "key_as_string" : "2021-12-25T00:02:30.000Z",
          "key" : 1640390550000,
          "doc_count" : 1615,
          "the_count" : {
            "value" : 845.0
          }
        },
        {
          "key_as_string" : "2021-12-25T00:03:30.000Z",
          "key" : 1640390610000,
          "doc_count" : 1474,
          "the_count" : {
            "value" : 845.0
          }
        },
        {
          "key_as_string" : "2021-12-25T00:04:30.000Z",
          "key" : 1640390670000,
          "doc_count" : 1861,
          "the_count" : {
            "value" : 845.0
          }
        }

Yes, it was difficult to understand, because you did not explain that the question had changed or that the first question had been solved.

To reach a solution quickly and without detours, if the second question was your goal from the beginning, it is better to state that from the start.

The second question is different enough from the first that it would be worth creating a new topic. As it is not possible to use a sub-aggregation on the output of moving_fn, a quite different solution is needed. (An alternative plan could be to use another moving_fn aggregation and then, in later processing, ignore the unnecessary buckets that come between the necessary ones, as sketched below.)
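For illustration only, a minimal sketch of that alternative, reusing the aggregation part of the query earlier in the thread. The second moving_fn and its name hourly_max_of_min are my assumptions, relying on pipeline aggregations being chainable via buckets_path; the result would still need post-processing to keep only one bucket per hour:

GET logstash-2021.12.2*/_search
{
  "size": 0,
  "aggs": {
    "myDatehistogram": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1m",
        "offset": "+30s"
      },
      "aggs": {
        "the_count": {
          "moving_fn": {
            "buckets_path": "_count",
            "window": 15,
            "script": "MovingFunctions.min(values)"
          }
        },
        "hourly_max_of_min": {
          "moving_fn": {
            "buckets_path": "the_count",
            "window": 60,
            "script": "MovingFunctions.max(values)"
          }
        }
      }
    }
  }
}

In the response, hourly_max_of_min in a given bucket would be the maximum of the_count over the preceding 60 one-minute buckets, so you would read only one bucket per hour and ignore the buckets in between.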

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.