I have a requirement to create a fingerprint per log entry and chain it to the fingerprint of the previous entry. So far I have gotten close by using the aggregate filter with a common task ID for all logs, so I can update each entry from a mapped value before the timeout fires. However, this approach cannot chain logs across separate aggregate executions, i.e., across timeouts, because the map is discarded when the timeout triggers.
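For reference, the aggregate-based approach I describe looks roughly like this. This is only a sketch: the field names `previous_fingerprint` and `fingerprint`, the SHA256 method, and the use of `%{type}` as the shared task ID (assuming all events carry the same `type`) are my own choices, and it assumes `pipeline.workers: 1` so events reach the filter in order:

```
filter {
  # First aggregate: copy the last fingerprint stored in the shared map
  # into the current event before hashing.
  aggregate {
    task_id => "%{type}"   # common task so every event shares one map
    code => "event.set('previous_fingerprint', map['last_fingerprint'] || '')"
  }

  # Hash the previous fingerprint together with the message to form the chain.
  fingerprint {
    source => ["previous_fingerprint", "message"]
    concatenate_sources => true
    method => "SHA256"
    target => "fingerprint"
  }

  # Second aggregate: store the new fingerprint for the next event.
  # Limitation: when the aggregate timeout fires, this map is dropped,
  # so the chain breaks at the timeout boundary.
  aggregate {
    task_id => "%{type}"
    code => "map['last_fingerprint'] = event.get('fingerprint')"
  }
}
```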
I have also been thinking of redirecting the output to Elasticsearch so I can then use it as an input, but I have no idea how to retrieve the previous log entry.
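In case it helps clarify what I mean by using Elasticsearch as an input: I imagine something like the elasticsearch filter plugin querying the index for the most recent document and copying its fingerprint onto the current event. This is only a guess at how it might look; the host, index pattern, and field names are placeholders, and I don't know whether this is reliable when events arrive faster than the index refreshes:

```
filter {
  elasticsearch {
    hosts => ["localhost:9200"]    # assumed Elasticsearch endpoint
    index => "logs-*"              # hypothetical index pattern
    query => "*"                   # match anything; sort picks the latest
    sort  => "@timestamp:desc"
    # copy the stored fingerprint of the newest document into this event
    fields => { "fingerprint" => "previous_fingerprint" }
  }
}
```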
Is there any way to achieve this?