Logstash aggregate task

Hi Team,
we are using the aggregate filter to collect all the commands for a specific SSH session, but our vendor recently changed the way session records are tracked: now only the start and stop records share the same task_id, whereas previously every record within a session had the same task_id.
Is there a way to adapt the aggregate filter to work around this?

Sep 28 10:36:26 **rosboccia** ssh xxxxxxx start task_id=359 start_time=1632825418 timezone=UTC
Sep 28 10:36:32 **rosboccia** pts/0 xxxxxxx stop task_id=177959 stop_time=1632825424 service=shell protocol=op-mode cmd=show cmd-arg=interfaces
Sep 28 10:36:51 **rosboccia** pts/1 unknown stop task_id=177970 stop_time=1632825443 service=shell protocol=conf-mode cmd=show
Sep 28 10:36:51 **rosboccia** pts/0 xxxxx stop task_id=177971 stop_time=1632825443 service=shell protocol=op-mode cmd=show cmd-arg=configuration cmd-arg=commands
Sep 28 10:37:54 **rosboccia** ssh xxxxxx  stop task_id=359 stop_time=1632825506 timezone=UTC
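To make the problem concrete, here is a quick sketch (plain Ruby, with the sample lines abbreviated) showing that after the vendor change only the session start/stop pair shares a task_id, while each per-command stop record carries its own unique id:

```ruby
# Abbreviated versions of the five sample records above
lines = [
  "ssh start task_id=359 start_time=1632825418",
  "pts/0 stop task_id=177959 cmd=show cmd-arg=interfaces",
  "pts/1 stop task_id=177970 cmd=show",
  "pts/0 stop task_id=177971 cmd=show cmd-arg=configuration",
  "ssh stop task_id=359 stop_time=1632825506"
]

# Extract the task_id from each record and count occurrences
ids = lines.map { |l| l[/task_id=(\d+)/, 1] }
counts = ids.tally
# task_id 359 appears twice (session start + stop);
# every per-command id appears exactly once
```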

aggregate {
  task_id => "%{vyatta_ip}-%{task_id}"
  code => '
    if map["aggregate_commands"].nil?
      map["aggregate_commands"] = []
      map["user_name"] = event.get("user_name")
      map["config_timestamp"] = event.get("config_timestamp")
      map["vyatta_ip"] = event.get("vyatta_ip")
      map["from_host"] = event.get("from_host")
    end
    cmd_str = event.get("vyatta_cmd")
    unless cmd_str.end_with? "\n"
      cmd_str = cmd_str + "\n"
    end
    map["aggregate_commands"] << cmd_str
  '
  push_map_as_event_on_timeout => true
  timeout_task_id_field => "task_id"
  timeout => 300 # 5 minutes timeout
  timeout_tags => ['_aggregatetimeout']
}

@Badger may I have your opinion on this?
Thank you

An aggregate filter requires a field that is common to all of the events that you want to aggregate. You do not seem to have that, so aggregate will not work.
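A minimal model (plain Ruby, not the actual plugin code) of how the aggregate filter keys its in-memory maps may help illustrate this: one map exists per distinct task_id value, so events whose task_ids differ can never be merged into the same map.

```ruby
# One aggregation map per task_id value, created on first sight,
# mimicking the aggregate filter's map store
maps = Hash.new { |h, k| h[k] = { "aggregate_commands" => [] } }

# The per-command stop records each carry a unique task_id
events = [
  { "task_id" => "177959", "vyatta_cmd" => "show interfaces" },
  { "task_id" => "177970", "vyatta_cmd" => "show" },
  { "task_id" => "177971", "vyatta_cmd" => "show configuration commands" }
]

events.each do |event|
  maps[event["task_id"]]["aggregate_commands"] << event["vyatta_cmd"]
end
# Result: three separate maps, each holding a single command --
# nothing ever accumulates into one session-wide map
```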

I expected this answer!
Thank you very much for the confirmation.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.