To check how it works I added a "test" field, and the result is that the "test" field is created, but the "test2" field and the _aggregatetimeout tag are never created. I don't know why that code isn't executed. It seems that the timeout is never reached, but how is that possible?
Is that field set on the event or not? You say it is and you also say it is not. If it is not then the most likely answer is that the [_id] field does not exist.
The timeout does not always occur. Is the pipeline still running, or has it terminated because it has completed all of its work? What inputs are you using?
The pipeline is still running; there is no end event as in the 3rd example. At the beginning I have a pipeline for all network logs. When there is action == UPDOWN, I add a "state" tag, and in the output for "state" logs I have a pipeline output. For the state logs I have a second pipeline with a pipeline input:
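(The original config was not posted; a generic sketch of that kind of pipeline-to-pipeline setup, with all IDs and addresses as placeholders, might look like this:)

```
# network pipeline - tag UPDOWN events and forward them
filter {
  if [action] == "UPDOWN" {
    mutate { add_tag => [ "state" ] }
  }
}
output {
  if "state" in [tags] {
    pipeline { send_to => [ "state_logs" ] }
  }
}

# state pipeline - receives only the tagged events
input {
  pipeline { address => "state_logs" }
}
```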
Ehh, such a stupid error; I spent hours finding the problem... "timeout_tags" doesn't add another tag, it overwrites everything. I have an if statement in the output which checks the tags before writing to the index, so when the aggregate filter overwrote the tags values, the pipeline couldn't write that data to my index.
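(For context, the failure mode looks roughly like this; the task_id, timeout value, and output conditional are illustrative placeholders, not the actual config:)

```
filter {
  aggregate {
    task_id => "%{[_id]}"
    code => '...'
    push_map_as_event_on_timeout => true
    timeout => 60
    timeout_tags => [ "_aggregatetimeout" ]
    # the timeout event is built only from the map entry, so the
    # original "state" tag is never on it - it only carries
    # "_aggregatetimeout"
  }
}
output {
  # this conditional silently drops the timeout events,
  # because they do not carry the "state" tag
  if "state" in [tags] {
    elasticsearch { ... }
  }
}
```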
That is not quite true. When a timeout occurs, a new event is created from the map entry. Unless you added a field called [tags] to the map, that field will not exist on the new event. If you changed the filter to use
code => 'map["tags"] = [ "foo", "bar" ];'
then I would be very, very surprised if that got overwritten rather than having a third tag added. (I always use single quotes around the code block in case I want to use Ruby string magic, which only works in strings surrounded with double quotes.)
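(Concretely, seeding [tags] in the map might look like the sketch below; the task_id and timeout values are placeholders. Per the explanation above, the pushed event should then start with the two seeded tags, with the timeout tag appended rather than overwriting them:)

```
aggregate {
  task_id => "%{[_id]}"
  code => 'map["tags"] = [ "foo", "bar" ];'
  push_map_as_event_on_timeout => true
  timeout => 60
  timeout_tags => [ "_aggregatetimeout" ]
  # the pushed timeout event gets tags ["foo", "bar"] from the map,
  # and "_aggregatetimeout" should be added as a third tag
}
```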
Even now I sometimes forget this, and it is not until I see the timeout events that I realize they do not have some of the fields common to all the original events.