Scripted field: add a value of 1 for every match

Hello,

New to the Elastic Stack and trying to get familiar with it. I'm not sure if this should be done in the ingest pipeline or in the visualization (I haven't had luck with either), but I'm attempting to solve this with a scripted field in Kibana for the time being.

I have a field "bytes" that is stored as a number. What I'm trying to do is count every occurrence of the field bytes that satisfies a requirement and return 1. This way I can sum all occurrences of this condition in a data table. For example, I want to record occurrences where the field bytes is greater than 10000 and sum those in a data table. My scripted field for "high transfer" looks like:

```
if (doc['bytes'].size() == 0) return '';
if (doc['bytes'].value > 10000) { return 1; }
```

However, this isn't producing any results in Discover. Can anyone give me some advice on how to achieve this?

Also, would this be better done as a script processor in an ingest pipeline, creating a field "High Transfer" that counts all bytes fields greater than 10000, to be used in a sum aggregation?
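As a side note on the scripted field above: one likely issue is that it mixes return types (an empty string in one branch, a number in the other) and falls through without returning anything when neither condition matches. A version that always returns a number might look like this (a sketch, not tested against this index):

```
// Scripted field (Painless): emit 1 when bytes > 10000, otherwise 0.
// Returning a number in every branch keeps the field usable in a Sum aggregation.
if (doc['bytes'].size() == 0) { return 0; }
return doc['bytes'].value > 10000 ? 1 : 0;
```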

Just bumping this

Hello,

I think I'm getting somewhere, but I could use some feedback if anyone has any. I'm trying to create a field "High Transfer" in my index via an ingest pipeline, using a bucket selector aggregation to get the values I need:

```
{
  "size": 0,
  "aggs": {
    "high_transfer_sum": {
      "value_count": {
        "field": "bytes"
      },
      "aggs": {
        "high_transfer_count": {
          "bucket_selector": {
            "buckets_path": {
              "sum": "high_transfer_count"
            },
            "script": {
              "source": "params.sum >= 10000"
            }
          }
        }
      }
    }
  }
}
```

I'm running into this error:

```
{
  "error" : {
    "root_cause" : [
      {
        "type" : "aggregation_initialization_exception",
        "reason" : "Aggregator [high_transfer_sum] of type [value_count] cannot accept sub-aggregations"
      }
    ],
    "type" : "aggregation_initialization_exception",
    "reason" : "Aggregator [high_transfer_sum] of type [value_count] cannot accept sub-aggregations"
  },
  "status" : 500
}
```

I'm trying to count the number of occurrences of the bytes field with a value greater than 10000 and store that count in a new field.
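For what it's worth, the error above happens because value_count is a metric aggregation, and metric aggregations cannot contain sub-aggregations (bucket_selector also only works as a sibling of multi-bucket aggregations, not inside a metric). One way to get this count is a filter aggregation, roughly like this (a sketch using the field name from the posts above):

```
{
  "size": 0,
  "aggs": {
    "high_transfer": {
      "filter": {
        "range": { "bytes": { "gt": 10000 } }
      }
    }
  }
}
```

The `doc_count` of the `high_transfer` bucket is then the number of documents with bytes greater than 10000.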

Just bumping this for visibility.

I suppose this is impossible.

What's the desired result? Do you want to see a table like this?

Range | Number of documents
0-10000 | 5
>10000 | 500

This is possible using the "ranges" or "filters" aggregation in the data table visualization.
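Under the hood, that data table would run a range aggregation roughly like the following (a sketch):

```
{
  "size": 0,
  "aggs": {
    "transfer_ranges": {
      "range": {
        "field": "bytes",
        "ranges": [
          { "to": 10000 },
          { "from": 10000 }
        ]
      }
    }
  }
}
```

Each range becomes a bucket whose `doc_count` fills the "Number of documents" column.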

Hello,

Thanks for the reply. Essentially what I'm looking for is a table like this:

Page | Hits | High Transfer
test.html | 65 | 7
page.html | 23 | 9
link.html | 64 | 10

I could easily sum the hits to an HTML page. But what I'm also looking for is the ability to sum the total number of occurrences of "high transfer" in a table. In the example, test.html had 65 hits, 7 of which had a byte transfer of 10000 or more.

Hello just bumping for visibility

It's not possible to do this as a separate column in the table (something we are working on), but you can create a separate row for it by using a "Split rows" aggregation with "Filters" — splitting like this:

Filter         | Page      | Hits
*              | test.html | 65
High transfer  | test.html | 7
*              | page.html | 23
High transfer  | page.html | 9

Hello,

I suppose it makes sense that you can't create a separate column in a table, but at the ingest/pipeline level you should at least be able to create an entirely new field in an index, right?

Theoretically, if I have an integer field "bytes" stored in my index, I should be able to process that field with some form of logic: "if I see the field 'bytes' in a document with a value greater than 10000, add 1 to a new field 'High Transfer'."

This is definitely a new beast for me; without a HAVING-clause equivalent I've been struggling a bit, but there has to be a way to count the occurrences of a field in a document that matches some logic and store the result in a new field.

Yes, that would be possible by using a scripted field and then adding a sum aggregation on this scripted field to the table. That should achieve the same thing as adding a new column.
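Expressed as a raw search request, that combination would look roughly like this (a sketch; `url.keyword` stands in for whatever field holds the page name in this index):

```
{
  "size": 0,
  "aggs": {
    "pages": {
      "terms": { "field": "url.keyword" },
      "aggs": {
        "high_transfer": {
          "sum": {
            "script": {
              "source": "doc['bytes'].size() == 0 ? 0 : (doc['bytes'].value > 10000 ? 1 : 0)"
            }
          }
        }
      }
    }
  }
}
```

Each page bucket's `doc_count` would give the Hits column, and `high_transfer.value` the High Transfer column.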

Hello,

Would you mind giving me some insight on how to do this, possibly with an example script? Would this scripted field be applied at the index pattern level or in the ingest pipeline?

You can do it as part of the index pattern. It would look like this:

`doc['bytes'].value > 10000`

Hello,

I attempted this with a scripted field in my original post, without luck. I thought doing it as a processing pipeline would be cleaner, and I was looking at how to set this up as a pipeline script that creates the new field.

without luck

What's the problem?

I thought doing this as a processing pipeline would be cleaner and was looking at how to facilitate this as part of a processing pipeline script to create the new field.

You can absolutely do this as part of the processing pipeline as well (using a script processor: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/script-processor.html )
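A script processor version of this might look roughly as follows (a sketch; the pipeline name `high_transfer_pipeline` is made up, and note that ingest scripts access fields via `ctx` rather than `doc`):

```
PUT _ingest/pipeline/high_transfer_pipeline
{
  "description": "Flag documents with bytes > 10000",
  "processors": [
    {
      "script": {
        "source": "ctx.high_transfer = (ctx.bytes != null && ctx.bytes > 10000) ? 1 : 0"
      }
    }
  ]
}
```

A Sum aggregation on `high_transfer` in the data table would then yield the per-page count.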

Hello,

I originally tried writing a pipeline processor, but it didn't work; I just received no data in my field.

The second option I tried was a scripted field in the index pattern, with the code sample shown in my original post. That didn't work either.

Do you happen to have an example of how to write this in a processing pipeline?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.