I use the aggregate filter and it is very powerful, but it does not work consistently: it sometimes fails to aggregate certain logs. It succeeds in roughly 90% of cases, and the failures are not reproducible — a group of events aggregates fine one run and fails the next. I would like to know how to improve the success rate. Any help is appreciated.
My config is below.
# aggregate: start of task — create the map
if [logger] == "LOG Start" {
  aggregate {
    task_id    => "%{id}"
    code       => "map['start_time'] = event.get('@timestamp')"
    map_action => "create"
  }
}

# aggregate: end of task — copy the map into the event and close the task
if [logger] == "LOG Stop" {
  aggregate {
    task_id => "%{id}"
    code    => "
      event.set('start_time', map['start_time'])
      map['stop_time'] = event.get('@timestamp')
      event.set('stop_time', map['stop_time'])
    "
    add_tag     => [ "aggregate_success" ]
    map_action  => "update"
    end_of_task => true
    timeout     => 100
  }
  # Only compute the duration when the aggregation succeeded; otherwise
  # start_time is nil and the subtraction raises a Ruby exception.
  if "aggregate_success" in [tags] {
    ruby {
      code    => "event.set('transfer_time', event.get('stop_time').to_f - event.get('start_time').to_f)"
      add_tag => [ "calculated_time_difference" ]
    }
  }
}
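For context, one thing I came across in the aggregate filter documentation is that all events sharing a task_id must pass through the same worker thread, so the plugin is only reliable with a single pipeline worker. A sketch of the settings I believe are relevant (setting names from logstash.yml; pipeline.ordered exists from Logstash 7.7 on):

```yaml
# logstash.yml — force a single filter worker so every event for a given
# task_id sees the same in-memory map, in order (required by aggregate)
pipeline.workers: 1
pipeline.ordered: true   # 7.7+; keep events in arrival order
```

With more than one worker, the Start and Stop events for the same id can land on different threads, which would explain intermittent failures like mine.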