Agg last 15 minutes returning just last 2 minutes

GET forensics/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "agg1": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "30s"
      }
    }
  }
}

Why does it return only the data of the last two minutes?

If you are speaking of the hits part of the response, that's expected, as only 10 hits are returned by default.

But the aggs part should give more data I think.
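
For example, the hits section can be widened by setting size explicitly; a minimal sketch (the value of 100 is only illustrative):

GET forensics/_search
{
  "size": 100,
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lte": "now"
            }
          }
        }
      ]
    }
  }
}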

I'm talking about the aggs, as I wrote in the title, and it doesn't give me all the data, just the last couple of minutes.

Could you share the full output?

BTW, are you sure you want to generate 2,880 buckets?

I'm showing you the edited query; the output I'm talking about comes from this:

GET forensics/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "agg1": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "30s"
      },
      "aggs": {
        "sub_agg1": {
          "terms": {
            "field": "action"
          }
        }
      }
    }
  }
}

BTW, what do you mean by 2,880 buckets?

Could you share the entire output, not only the aggs part? If too big, share on gist.github.com and add the link here.

2880 = 3600 seconds per hour * 24 hours per day / 30 seconds per bucket
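
As a side note, a date_histogram only produces buckets over the time span where documents actually exist; if you want empty buckets across the whole day regardless of where the data sits, you can force them with extended_bounds. A minimal sketch, assuming a coarser 15m interval to keep the bucket count reasonable:

GET forensics/_search
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "agg1": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "15m",
        "min_doc_count": 0,
        "extended_bounds": {
          "min": "now-1d/d",
          "max": "now"
        }
      }
    }
  }
}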

My original "time query" is different; I used a simple timeframe here to simplify things.

GET forensics/_search
{
  "size": 1,
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "sort": [
    { "@timestamp": { "order": "asc" } }
  ]
}

Sorry, I didn't notice; mine is:

"sort": [
    {
      "@timestamp": {"order": "desc"},
      "_id": {"order": "desc"}
    }
  ]

Using the new query you gave unfortunately didn't change much; the response is the same, still getting just a couple of minutes.

Here is the full query with some more properties:

GET forensics/_search
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "sort": [
    {
      "@timestamp": {"order": "desc"},
      "_id": {"order": "desc"}
    }
  ],
  "aggs": {
    "agg1": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "30s"
      },
      "aggs": {
        "sub_agg1": {
          "terms": {
            "field": "action"
          }
        }
      }
    }
  }
}
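
Side note: on Elasticsearch 7.2 and later, the bare interval parameter of date_histogram is deprecated in favour of fixed_interval / calendar_interval, so an equivalent request (a sketch, behaviour otherwise unchanged) would be:

GET forensics/_search
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "agg1": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "30s"
      },
      "aggs": {
        "sub_agg1": {
          "terms": {
            "field": "action"
          }
        }
      }
    }
  }
}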

I think the objective here is to know if there is data in the past.

For this reason @dadoonet is suggesting sorting by @timestamp ascending, so we can see the timestamp of the oldest hit you have in the time range.

He didn't give any instructions. What did he want me to do?
Post the response here?

Run the following 2 queries (no modifications) and post the full response.

GET forensics/_search
{
  "size": 1,
  "sort": [
    {
      "@timestamp": {
        "order": "asc"
      }
    }
  ],
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "stat": {
      "stats": {
        "field": "@timestamp"
      }
    }
  }
}
GET forensics/_search
{
  "size": 0,
  "aggs": {
    "stat": {
      "stats": {
        "field": "@timestamp"
      }
    }
  }
}

First query:

{
  "took" : 10,
  "timed_out" : false,
  "_shards" : {
    "total" : 12,
    "successful" : 12,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 999,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [
      {
        "_index" : "forensics-2020.04.30-007132",
        "_type" : "_doc",
        "_id" : "gntsynEBABC9EwCFjvEw",
        "_score" : null,
        "_source" : {
          "attack_sev" : "High",
          "eventMap" : {
            "map" : [
              {
                "key" : "ParamName",
                "value" : "cmd"
              },
              {
                "key" : "ParamValue",
                "value" : "cmd.exe"
              },
              {
                "key" : "ParamType",
                "value" : "ParamTypeURI"
              },
              {
                "key" : "Zone",
                "value" : "Parameters"
              }
            ]
          },
          "eventSignature" : "7592",
          "rule" : "7592",
          "@timestamp" : "2020-04-30T09:30:37.177Z",
          "action" : "REPORT",
          "trafficTransId" : "34ebfb0c-77b4-4d90-a648-9adfdc5cfb3b",
          "waas_tag" : "app-httpbin",
          "related" : "",
          "dynamic" : "",
          "web_servers" : "Any",
          "waas_profile" : "radware/waas-sample-app-httpbin-profile",
          "policyClsId" : "classifier1",
          "attack" : "URL Access Violation",
          "policyVersionHash" : "4c3877395b510e1bf4636c355a36de4ad9552b9cd4945a322add51a60bd00ae1",
          "trafficUri" : "/s",
          "threat" : "Access Control",
          "eventId" : "by_pattern",
          "sourceHostname" : "waas-sample-app-httpbin-deployment-5f58dc8c9-b299b",
          "policyProtectionId" : "protection1",
          "trafficMethod" : "GET",
          "tags" : [
            "_geoip_lookup_failure"
          ],
          "description" : "Signature engine intercepted a malicious request, which includes a blocked pattern. Description: There was an attempt to retrieve Windows Applications file",
          "title" : "Pattern Violation Detected",
          "eventModule" : "Known Attacks - Signature Engine",
          "policyName" : "httpbinPolicy"
        },
        "sort" : [
          1588239037177
        ]
      }
    ]
  },
  "aggregations" : {
    "stat" : {
      "count" : 999,
      "min" : 1.588239037177E12,
      "max" : 1.588239103142E12,
      "avg" : 1.5882390699646445E12,
      "sum" : 1.58665083089468E15,
      "min_as_string" : "2020-04-30T09:30:37.177Z",
      "max_as_string" : "2020-04-30T09:31:43.142Z",
      "avg_as_string" : "2020-04-30T09:31:09.964Z",
      "sum_as_string" : "+52248-12-18T05:54:54.680Z"
    }
  }
}

Second query:

{
  "took" : 34,
  "timed_out" : false,
  "_shards" : {
    "total" : 12,
    "successful" : 12,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1050,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "stat" : {
      "count" : 1050,
      "min" : 1.588239077157E12,
      "max" : 1.588239146136E12,
      "avg" : 1.5882391116414248E12,
      "sum" : 1.667651067223496E15,
      "min_as_string" : "2020-04-30T09:31:17.157Z",
      "max_as_string" : "2020-04-30T09:32:26.136Z",
      "avg_as_string" : "2020-04-30T09:31:51.641Z",
      "sum_as_string" : "+54815-10-04T23:33:43.496Z"
    }
  }
}

I did, here: Agg last 15 minutes returning just last 2 minutes

But indeed I did not say specifically that you should not modify the request I sent and that you should share the output. Thanks @Luca_Belluccini for being more precise.

Given your requests, we can see:

  • The oldest document when we apply the time filter now-1d/d to now is 2020-04-30T09:30:37.177Z
  • The most recent document when we apply the time filter now-1d/d to now is 2020-04-30T09:31:43.142Z
  • Without filters, you have only 1050 documents in the forensics alias, and the oldest event is 2020-04-30T09:31:17.157Z, while the most recent one is 2020-04-30T09:32:26.136Z

In the response I can see that you seem to be using ILM, and the rollover policy seems far too frequent, as you have forensics-2020.04.30-007132 (7132!!!) rollovers!

Maybe the ILM policy is configured to delete data too aggressively, so you no longer have any data before this date?
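
If you want to confirm that, the ILM APIs show which lifecycle policy the indices are attached to and what its phases contain; a minimal sketch (assuming forensics-* matches your rolled-over indices):

GET forensics-*/_ilm/explain

GET _ilm/policy

The explain output includes the policy name and the current phase/step of each index, and the policy definition will show the min_age of its delete phase.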

As it turns out, I'm working on a development cluster and it's set to delete data older than 2 minutes.
Thank you @Luca_Belluccini and @dadoonet for helping me figure this out.
