How to set the document _id to the current datetime in a transform

The transform runs every 5 seconds, and the document for each group_by combination is overwritten on every run.

Can you please suggest how to create a separate record for each 5-second run?
Is it possible to set the _id to the current datetime, so that each run produces a different record?

Thanks
Anji

Can you elaborate a little more here, please? Sharing the details of the transform would be helpful.

As already stated, to understand your problem it would help if you shared your config and maybe some example data. However, I think what you are missing is another group_by level. I assume you currently group_by a terms field; all you need is to add another group_by using a date_histogram configured with fixed_interval: "5s". This automatically adds a date field containing the start time of each bucket.
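A minimal sketch of such a config, assuming a source index calls, a timestamp field @timestamp, and a terms field id (all names are placeholders for your actual setup):

```
PUT _transform/calls_per_5s
{
  "source": { "index": "calls" },
  "dest": { "index": "calls_summary" },
  "frequency": "5s",
  "sync": { "time": { "field": "@timestamp" } },
  "pivot": {
    "group_by": {
      "call_id": { "terms": { "field": "id" } },
      "bucket_start": {
        "date_histogram": {
          "field": "@timestamp",
          "fixed_interval": "5s"
        }
      }
    },
    "aggregations": {
      "total_duration": { "sum": { "field": "duration" } }
    }
  }
}
```

Every 5-second bucket then produces its own document, keyed by the terms value plus bucket_start.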

Hi Hendrik,

If we group_by a 5s date_histogram, only the documents inside that 5-second bucket are considered for the aggregation.
I want to consider previous documents as well, and still run every 5s.

Let me explain the scenario clearly.

Hi,

I have documents in the source index like:

document 1: duration: 10, callstatus: open, id: id1

document 2: duration: 15, callstatus: open, id: id1

document 3: duration: 20, callstatus: closed, id: id1

document 4: duration: 10, callstatus: open, id: id2

document 5: duration: 15, callstatus: open, id: id2

Now I run the transform every 5 seconds to get the number of open calls.

The transform uses a scripted_metric aggregation to calculate the number of open calls (a sketch of this kind of aggregation is shown after the expected result below). The first run gives 1 open call and creates a document in the destination index with the fields timestamp and noofopencalls.

The 2nd time it runs, the document is overwritten with the same values because there is no new record (or the transform does not run at all, since there is no new data).

Now a new record arrives:

document 6: duration: 15, callstatus: closed, id: id2

The 3rd time the transform runs, the document is overwritten again, now with the number of open calls as 0.

In this case I need a separate document for each 5s transform run, 3 documents in total in the new index. But at any point in time there is only one document, because the transform keeps updating the same one.

I cannot group_by 5s, because that considers only the documents inside the time window, and it will not give the correct number of open calls (calculating this needs to consider previous records as well).

The expected result in the destination index is:

document 1: timestamp: xx-xx-xxxx:xx, noofcalls: 1

document 2: timestamp: xx-xx-xxxx:xx, noofcalls: 1

document 3: timestamp: xx-xx-xxxx:xx, noofcalls: 0
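For reference, a minimal sketch of the kind of scripted_metric described above; the group_by field callgroup and the field names id, callstatus, and @timestamp are assumptions, not the actual config:

```
"pivot": {
  "group_by": {
    "callgroup": { "terms": { "field": "callgroup" } }
  },
  "aggregations": {
    "noofopencalls": {
      "scripted_metric": {
        "init_script": "state.latest = [:]",
        "map_script": """
          // remember the newest status seen per call id
          String id = doc['id'].value;
          long ts = doc['@timestamp'].value.toInstant().toEpochMilli();
          def prev = state.latest.get(id);
          if (prev == null || ts > (long) prev[0]) {
            state.latest[id] = [ts, doc['callstatus'].value];
          }
        """,
        "combine_script": "return state.latest",
        "reduce_script": """
          // merge the per-shard maps, keeping the newest status per id
          Map latest = [:];
          for (m in states) {
            if (m == null) continue;
            for (e in m.entrySet()) {
              def prev = latest.get(e.getKey());
              if (prev == null || (long) e.getValue()[0] > (long) prev[0]) {
                latest[e.getKey()] = e.getValue();
              }
            }
          }
          // count ids whose latest status is still open
          int open = 0;
          for (v in latest.values()) {
            if (v[1] == 'open') {
              open++;
            }
          }
          return open;
        """
      }
    }
  }
}
```

Since the group_by combination never changes, every checkpoint rewrites the same destination document, which is exactly the overwriting described above.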

Transform is not made for this use case, because updating the document is part of the design.

However, instead of writing directly to an index, you can attach an ingest pipeline to the destination. In an ingest pipeline you have access to all fields, including internal ones, and you can use scripts. That means it is possible to change internal fields like _id, _index, etc.
As said, this is not how transform is meant to be used and is therefore considered unsupported, but it should work.
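A minimal sketch of such a pipeline; the pipeline name unique-doc-per-run and the helper field run_timestamp are placeholders, not an established convention:

```
PUT _ingest/pipeline/unique-doc-per-run
{
  "processors": [
    {
      "set": {
        "description": "Record the ingest time on the document",
        "field": "run_timestamp",
        "value": "{{{_ingest.timestamp}}}"
      }
    },
    {
      "script": {
        "description": "Append the run time to _id so each transform run writes a new document instead of overwriting",
        "source": "ctx._id = ctx._id + '_' + ctx.run_timestamp"
      }
    },
    {
      "remove": {
        "description": "Drop the helper field again",
        "field": "run_timestamp"
      }
    }
  ]
}
```

The pipeline is then referenced from the transform's destination, e.g.:

```
"dest": {
  "index": "calls_summary",
  "pipeline": "unique-doc-per-run"
}
```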

Thank you so much Hendrik for the clarification and help.

I have another scenario. Could you please clarify this scenario as well?

Hi,
I have the scenario below:
A record is received at 9:10 a.m. with the status Logged_In.
I need to produce a summary record every 15 minutes showing how long the user was logged in during that interval.
So I executed the transform at 9:15, and the summary period written was 5 minutes (as the 9:10 record is considered by the transform).
No new record is received at 9:30, when the next transform run is due.
I need the summary period calculated as 15 minutes in this transform run.

A transform only updates a document if there is a change, so it won't re-create a document and, for example, update time_logged_in from 5m to 15m.

For this case I suggest creating a session_start field that stores the login time. Your application can calculate the current session time from the system time and session_start. Once the transform receives the logged-out event, it can set the session_end field and calculate the final session time.
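A hedged sketch of what that could look like, assuming an events index with user, status, and @timestamp fields, and one session per user for simplicity (a real setup would likely also group by a session id):

```
PUT _transform/session_summary
{
  "source": { "index": "events" },
  "dest": { "index": "sessions" },
  "frequency": "15m",
  "sync": { "time": { "field": "@timestamp" } },
  "pivot": {
    "group_by": {
      "user": { "terms": { "field": "user" } }
    },
    "aggregations": {
      "session_start": { "min": { "field": "@timestamp" } },
      "session_end": {
        "filter": { "term": { "status": "Logged_Out" } },
        "aggregations": {
          "time": { "max": { "field": "@timestamp" } }
        }
      }
    }
  }
}
```

While no Logged_Out event has arrived, session_end.time stays empty and the application computes the elapsed time as the current time minus session_start; once it is set, the session time is session_end.time minus session_start.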
