Extracting time from the @timestamp field using a runtime field

I understand Scripted Fields have been deprecated since 7.13.

The replacement is now runtime fields.

I have a @timestamp field

How can I extract the time component into a new field @timeofday, such that when I filter by @timeofday I can filter by a time range?

I was reading

Lucene Expressions Language | Elasticsearch Guide [5.0] | Elastic

but I still feel a bit lost about how complicated an approach I need. It's actually a simple modification, because I'm relying on data in existing columns to derive new ones.

Index Management

Since this new field is an in-line addition, @timeofday does not exist in my Index Template.

If I ingest more months of data (1 index per month), how can I ensure this new @timeofday field will be calculated and included in the new data, so that everything is seamless / consistent in Discover?

What version are you on?

Also what do you want for Time of Day?

HH:mm:ss?

If you want to do a range, it will need to be something more like Number ... so I am not sure exactly what you want... I will take a look.

In the Data View it is pretty easy

You can test it

Then if you really want to use it in queries etc., you need to actually add it to the mapping / template, using a runtime field rather than the Lucene expression from the docs you linked above. (BTW you linked to a really old document.)

Sorry.

I'm now on 8.11

Yes, HH:mm:ss is what I'm looking for.

Then if you really want to use it in queries etc... then you need to actually add it to the mapping / template

Sounds like I will need to re-index my 6 months of data just to update my mapping / template with this new @timeofday field?

If I use the console to do the update, I understand the existing data doesn't get updated with this new field?

Since the last time we discussed this during the setup phase, I've continued to keep only 6 months' worth of data.

So you need to look at examples here...

And the API Here...

OK, we can do that as a keyword... for filtering and sorting you will need to see if that works... You will need to figure out the timezone stuff if you want to...

// Runtime field script (Painless): read the @timestamp doc value
// and emit only the local time portion (HH:mm:ss.SSS) as a keyword.
ZonedDateTime zdt = doc['@timestamp'].value;
String datetime = zdt.format(DateTimeFormatter.ISO_LOCAL_TIME);
emit(datetime);
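If you want to try the same thing from the Dev Tools console instead of the Data View UI, here is a minimal sketch using a search-time runtime field (the index pattern my-logs-* is just a placeholder, not a name from this thread):

GET my-logs-*/_search
{
  "runtime_mappings": {
    "time_of_day": {
      "type": "keyword",
      "script": {
        "source": "emit(doc['@timestamp'].value.format(DateTimeFormatter.ISO_LOCAL_TIME));"
      }
    }
  },
  "fields": ["time_of_day"],
  "_source": false,
  "size": 3
}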


I just tried it out.

Keyword is not my desired type.

I need it to be in a Date format so that I can filter between 2 timings that will just be HH:mm:ss.

Yeah, the timezone looks tricky to account for.

🙂 Just an hour / minutes value is not a Date format....

You are not going to be able to use that ....

You can create a custom filter... using a range query filter

You can read about that

{
  "bool": {
    "must": [
      {
        "range": {
          "time_of_day": {
            "gte": "15:51:45.468"
          }
        }
      }
    ]
  }
}

I will try to explore this.

Basically, with my 6 months of logs, which contain entries 24/7,

I want to filter away entries falling within 0100-0430 hrs in the early morning, for every day (i.e. 1 Jan - 30 June), and likewise scale this to more days when I pipe in additional data.

I wanted my Data View to omit 0100-0430hrs for every day.

EDIT: I tried using @timestamp in Query DSL but it didn't work. It wasn't as straightforward to read only the time component.

Do I really need the new column to do this filtering?

Well, it depends on what you actually want to do... as you are only providing partial information 🙂

If you want to filter ... yes you will need to create another field

If you want to aggregate, you could use a Date Histogram / Visualization, and then no, you do not.
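For the aggregation route, a minimal console sketch of a date histogram that buckets documents per hour straight off @timestamp, with no extra field required (the index pattern my-logs-* is a placeholder):

GET my-logs-*/_search
{
  "size": 0,
  "aggs": {
    "per_hour": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "1h"
      }
    }
  }
}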

Logically I want to filter for entries NOT within 00:30:00 and 05:00:00

I used must_not and it worked


DSL

{
  "bool": {
    "must_not": [
      {
        "range": {
          "@time_of_day": {
            "gte": "00:30:00.00",
            "lte": "05:00:00.00"
          }
        }
      }
    ]
  }
}

Runtime field

// Runtime field script: shift @timestamp by +8 hours (local timezone offset)
// and emit only the time portion (HH:mm:ss.SSS).
ZonedDateTime zdt = doc['@timestamp'].value;
ZonedDateTime updatedZdt = zdt.plusHours(8);
String datetime = updatedZdt.format(DateTimeFormatter.ISO_LOCAL_TIME);

emit(datetime);

I can appreciate the result and the extent of the convenience. So the new field is only present in that specific Data View.

OK, I'm now thinking further about whether I could integrate this into the Index Template and make @time_of_day a permanent "scripted" column that exists inside the Index Template.

It would extract the HH:mm:ss.SSS component from @timestamp and translate it into the new column @time_of_day.

Yet in the raw CSV files, the @time_of_day column will not exist.

Can this be done?

Add the mapping to the template and a script processor to the ingest pipeline... and you do not need the @ for every timestamp / date field; that is just a naming convention for the special field @timestamp.

take a look at this...

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "script": {
          "lang": "painless",
          "source": """ZonedDateTime zdt = ZonedDateTime.parse(ctx['@timestamp'], DateTimeFormatter.ISO_ZONED_DATE_TIME);
          String time_of_day = zdt.format(DateTimeFormatter.ISO_LOCAL_TIME);
          ctx['time_of_day'] = time_of_day;
          """
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "@timestamp": "2023-12-04T03:55:39.219Z"
      }
    }
  ]
}

# result

{
  "docs": [
    {
      "doc": {
        "_index": "_index",
        "_version": "-3",
        "_id": "_id",
        "_source": {
          "time_of_day": "03:55:39.219",
          "@timestamp": "2023-12-04T03:55:39.219Z"
        },
        "_ingest": {
          "timestamp": "2023-12-04T04:50:52.377145525Z"
        }
      }
    }
  ]
}
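For completeness, a minimal sketch of wiring this up for real data, following the same approach: register the pipeline, then add the keyword mapping and a default pipeline to the index template. The pipeline name add-time-of-day, the template name my-logs-template, and the index pattern my-logs-* are all placeholders, not names from this thread.

PUT _ingest/pipeline/add-time-of-day
{
  "description": "Copy the local time portion of @timestamp into time_of_day",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": """ZonedDateTime zdt = ZonedDateTime.parse(ctx['@timestamp'], DateTimeFormatter.ISO_ZONED_DATE_TIME);
        ctx['time_of_day'] = zdt.format(DateTimeFormatter.ISO_LOCAL_TIME);
        """
      }
    }
  ]
}

PUT _index_template/my-logs-template
{
  "index_patterns": ["my-logs-*"],
  "template": {
    "settings": {
      "index.default_pipeline": "add-time-of-day"
    },
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "time_of_day": { "type": "keyword" }
      }
    }
  }
}

New indices created from the template will then get the field at ingest; existing indices would still need a reindex (or _update_by_query with the pipeline) to pick it up.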

I modified my field script slightly to address the timezone offset of 8 hours. This checked well for me.

ZonedDateTime zdt = doc['@timestamp'].value;
ZonedDateTime updatedZdt = zdt.plusHours(8);
String datetime = updatedZdt.format(DateTimeFormatter.ISO_LOCAL_TIME);

emit(datetime);
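As an aside, a sketch of one way to avoid the hard-coded +8: convert the instant into an explicit time zone before formatting. The zone Asia/Singapore below is only an assumed example of a UTC+8 zone, not something confirmed in this thread.

// Convert the UTC instant to a named zone, then emit the local time.
ZonedDateTime zdt = doc['@timestamp'].value;
ZonedDateTime local = zdt.withZoneSameInstant(ZoneId.of('Asia/Singapore'));
emit(local.format(DateTimeFormatter.ISO_LOCAL_TIME));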

Let me try your code later for mapping and ingest.

On this note, I understand that even if I update the mapping, existing data will not be re-indexed by the new change?

Because, say, if I remove @time_of_day from Discover, the current dataset from Jan-June 2023 will not have this column and will not pick up the new time_of_day that's now present in the Index Template?

I'm coming from a "housekeeping / consolidation" point of view: if I already had permanent code inside the Index Template, then I technically wouldn't need the "makeshift" field I created in Discover, which sits at the more front-end portion of things.

You can add a runtime field to the mapping; then you do not need to reload the data (a sketch follows below).

Or you can use an ingest pipeline and reload the data.

Pretty much your two choices
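For the first option, a minimal sketch of adding time_of_day as a runtime field to an existing index's mapping (the index name my-logs-000001 is a placeholder; the same runtime block can also go into the index template's mappings):

PUT my-logs-000001/_mapping
{
  "runtime": {
    "time_of_day": {
      "type": "keyword",
      "script": {
        "source": """ZonedDateTime zdt = doc['@timestamp'].value;
        emit(zdt.format(DateTimeFormatter.ISO_LOCAL_TIME));
        """
      }
    }
  }
}

Because the value is computed at query time, documents already in the index get the field without reloading.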

Is this Console entry considered the Runtime method?

Even though it's

POST _ingest/pipeline/_simulate
{
  "pipeline": {

I still see

"script": {
          "lang": "painless",

No, as the API indicates, that's an ingest pipeline, so it happens at ingest.

Feel free to read the docs...

There is a section on runtime fields

And there's another section on ingest pipelines.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.