Painless script to compare two fields taken at an interval

Hello :wave:

I'm getting started with the ELK stack and trying a few things with it to learn.

Currently I'm trying to make a scripted field that can detect a change between two hashes taken at two different moments.

I would like to use this field to determine whether a file has changed, for example, and create a graph that summarizes this behavior.

My scripted field name is Diff.

My first data:

  {
    "_id": "Tj_63HYBDcO2xQXMU82i",
    "@timestamp": "2020-11-16T23:09:50.047+01:00",
    "File": "System.Windows.Forms.Design.Editors.resources.dll",
    "Diff": [
      "No change"
    ]
  }

Second data, taken one day later:

  {
    "_id": "UD_63HYBDcO2xQXMU82i",
    "@timestamp": "2020-12-16T23:19:50.047+01:00",
    "File": "System.Windows.Forms.Design.Editors.resources.dll",
    "Diff": [
      "No change"
    ]
  }

Do you know if it is possible to make this comparison with a scripted field?
Or am I trying to do something this tool isn't made for? In that case, I'd welcome any ideas :smiley:

So far I've managed a couple of things with the help of the tutorials on the site (great tutorials, by the way), but I have to admit I haven't gotten very far:

    if (doc['SHA256'].size() == 0) {
        return "error no hash";
    } else {
        if (doc['SHA256'].value == "784ABCBD7342B9FABB1968694E384147ED1204AD006BC36749D46D395351820C") {
            return "No change";
        } else {
            return "Changed";
        }
    }
Thanks for your time and orientation :slight_smile:

Scripted fields work on a single record and can only use the data in that record.

It sounds like you are trying to compare two different records, which isn't possible for the reason above.

If you are going to hard-code that hash in the scripted field like you did in your example, then this should be possible. But it sounds like for your use case you wouldn't want to do that.

Okay, thank you for your quick answer :slight_smile: !

In my case I want to compare these two records.
So if I want to compare two records, how can I do it, with or without scripted fields?

There could be multiple methods of doing this during the ingestion of the data.

How are you ingesting this data? (logstash, beats, ingest pipeline, manually, etc)

I ingest the data manually.
I haven't dug into the ingest pipeline aspect yet; maybe it's easier to do this task with it?


Correct. An ingest pipeline is the way to go. I have not used it yet but the enrich processor could be the way to go for you.
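From the docs, the setup could look roughly like this. A sketch only, since I haven't tried it myself; the index name `file-hashes-baseline` and the policy and pipeline names are just placeholders for illustration:

```
// Hypothetical enrich policy: look up the reference record by file name.
PUT /_enrich/policy/file-hash-policy
{
  "match": {
    "indices": "file-hashes-baseline",
    "match_field": "File",
    "enrich_fields": ["SHA256"]
  }
}

// The policy must be executed before it can be used.
POST /_enrich/policy/file-hash-policy/_execute

// Ingest pipeline that copies the reference hash onto the incoming document.
PUT /_ingest/pipeline/file-hash-compare
{
  "processors": [
    {
      "enrich": {
        "policy_name": "file-hash-policy",
        "field": "File",
        "target_field": "previous"
      }
    }
  ]
}
```

You would then index your new documents with `?pipeline=file-hash-compare` so each one arrives with the old hash attached under `previous`.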

If I understand the concept correctly, the incoming document is my new document (the second record in my previous post) with my new hash.

And the source index is my old index with my old hash (the first record in my post), while the target index is the one used for display in my dashboard?

So I have to create an empty index to gather this new dataset? Did I get it right?


Thanks for your patience :smiley:

Reading up on enrich processors, it seems like this might not work due to the rules for enrich indices:

  • They are system indices, meaning they’re managed internally by Elasticsearch and only intended for use with enrich processors.
  • They always begin with .enrich-*.
  • They are read-only, meaning you can’t directly change them.
  • They are force merged for fast retrieval.

This can certainly be done using Logstash and the Elasticsearch filter, but I am still looking for another way since you aren't using Logstash.
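For reference, if you were running Logstash, the Elasticsearch filter could look up the previous record and do the comparison inline. A minimal sketch, assuming the old records live in an index called `files` and the hash field is `SHA256` (all names here are illustrative):

```
filter {
  # Look up the stored record for the same file name and
  # copy its hash into a temporary field on the new event.
  elasticsearch {
    hosts  => ["localhost:9200"]
    index  => "files"
    query  => "File:%{[File]}"
    fields => { "SHA256" => "previous_hash" }
  }

  # Compare the stored hash with the incoming one.
  if [previous_hash] and [previous_hash] != [SHA256] {
    mutate { add_field => { "Diff" => "Changed" } }
  } else {
    mutate { add_field => { "Diff" => "No change" } }
  }
}
```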

Maybe with the enrich processor I can juxtapose my old data (the reference hash) next to my new data (the new hash), then use a scripted field to make the comparison and return a boolean when this document has been modified?
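Something like this, maybe? If the enrich processor copies the old hash onto the document (say under `previous.SHA256`; the field names are guesses on my part), a script processor in the same pipeline could do the comparison at ingest time instead of a scripted field:

```
{
  "script": {
    "lang": "painless",
    "source": """
      // Compare the enriched reference hash with the incoming one.
      if (ctx.previous == null || ctx.previous.SHA256 == null) {
        ctx.Diff = "error no hash";
      } else if (ctx.previous.SHA256 == ctx.SHA256) {
        ctx.Diff = "No change";
      } else {
        ctx.Diff = "Changed";
      }
    """
  }
}
```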

I'm probably diverting the tool from its original use by doing this, and there may be a simpler alternative.
I'm going to dig into all this and take a look at Logstash.

Thanks !

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.