Aggregate filter: error when the same task_id is used in two separate aggregate filters

Hi,
I'm running a Logstash 7.9.2 Docker container inside a Kubernetes cluster and I'm seeing the following error message in Logstash when I attempt to use the same task_id in two separate aggregate filters:

[ERROR] 2021-02-12 15:28:57.351 [Converge PipelineAction::Reload] agent - Failed to execute action {:id=>:"sandbox-qa", :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Reload, action_result: false", :backtrace=>nil}

These are the aggregate plugins:

if "interfaces" in [name] {
    aggregate {
        task_id => "%{device}-%{interface-name}"
        push_previous_map_as_event => true
        code => "
            event.to_hash.each { |k,v|
                unless map[k]
                    map[k] = v
                end
            }
            event.cancel
        "
    }
} 

if [in-octets] and [out-octets] {
    aggregate {
        task_id => "%{device}-%{interface-name}"
        inactivity_timeout => 120
        timeout_timestamp_field => "@timestamp"
        push_map_as_event_on_timeout => true
        code => "
            event.to_hash.each { |k,v|
                unless map[k]
                    map[k] = v
                end
            }
            event.cancel
        "
    }
}

If I change the first task_id to something different, e.g. "%{interface-name}", the error goes away and the pipeline runs.

Am I allowed to have the same task_id in two separate aggregate plugins running inside the same pipeline?

Thank you.

Yes.

I suggest you set log.level to debug and see if you get a more informative error message.
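For a containerized setup, one way to raise the log level is through the container environment; the official Logstash Docker images map uppercase/underscore environment variables onto logstash.yml settings. A sketch, assuming the standard docker.elastic.co image (pod/container names are placeholders):

```shell
# Option 1: plain Docker — the image maps LOG_LEVEL onto the
# log.level setting in logstash.yml
docker run --rm -e LOG_LEVEL=debug docker.elastic.co/logstash/logstash:7.9.2

# Option 2: Kubernetes — set the same variable in the container spec:
#   env:
#     - name: LOG_LEVEL
#       value: "debug"
```

Alternatively, `log.level: debug` can be set directly in a mounted logstash.yml.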

Hi @Badger, these are the log lines before and after the [ERROR] event. I don't see any hints in the logs that point me in the right direction.

[DEBUG] 2021-02-12 20:17:10.205 [[qa-hub-json-interfaces]-pipeline-manager] grok - Adding pattern {"BACULA_LOG_FATAL_CONN"=>"Fatal error: bsock.c:133 Unable to connect to (Client: %{BACULA_HOST:client}|Storage daemon) on %{HOSTNAME}:%{POSINT}. ERR=(?<berror>%{GREEDYDATA})"}
[DEBUG] 2021-02-12 20:17:10.205 [[qa-hub-json-interfaces]-pipeline-manager] grok - Adding pattern {"BACULA_LOG_NO_CONNECT"=>"Warning: bsock.c:127 Could not connect to (Client: %{BACULA_HOST:client}|Storage daemon) on %{HOSTNAME}:%{POSINT}. ERR=(?<berror>%{GREEDYDATA})"}
[DEBUG] 2021-02-12 20:17:10.205 [[qa-hub-json-interfaces]-pipeline-manager] grok - Adding pattern {"BACULA_LOG_NO_AUTH"=>"Fatal error: Unable to authenticate with File daemon at %{HOSTNAME}. Possible causes:"}
[DEBUG] 2021-02-12 20:17:10.205 [[qa-hub-json-interfaces]-pipeline-manager] grok - Adding pattern {"BACULA_LOG_NOSUIT"=>"No prior or suitable Full backup found in catalog. Doing FULL backup."}
[DEBUG] 2021-02-12 20:17:10.205 [[lab-backbone-json-interfaces]-pipeline-manager] aggregate - Aggregate register call {:code=>"\n # Handle description change\n if map['description'] == nil\n desc = event.get('description');\n if desc != ''\n map['description'] = desc;\n else\n map['description'] = 'undefined';\n end\n end\n "}
[DEBUG] 2021-02-12 20:17:10.206 [[qa-hub-json-interfaces]-pipeline-manager] grok - Adding pattern {"BACULA_LOG_NOPRIOR"=>"No prior Full backup Job record found."}
[ERROR] 2021-02-12 20:17:10.206 [Converge PipelineAction::Create] agent - Failed to execute action {:id=>:"sandbox-qa", :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}
[DEBUG] 2021-02-12 20:17:10.206 [[qa-hub-json-interfaces]-pipeline-manager] grok - Adding pattern {"BACULA_LOG_JOB"=>"(Error: )?Bacula %{BACULA_HOST} %{BACULA_VERSION} \(%{BACULA_VERSION}\):"}
[DEBUG] 2021-02-12 20:17:10.207 [[lab-backbone-json-interfaces]-pipeline-manager] aggregate - Aggregate timeout for '%{device}-%{interface-name}' pattern: seconds
[DEBUG] 2021-02-12 20:17:10.207 [[qa-hub-json-interfaces]-pipeline-manager] grok - Adding pattern {"BACULA_LOGLINE"=>"%{BACULA_TIMESTAMP:bts} %{BACULA_HOST:hostname} JobId %{INT:jobid}: (%{BACULA_LOG_MAX_CAPACITY}|%{BACULA_LOG_END_VOLUME}|%{BACULA_LOG_NEW_VOLUME}|%{BACULA_LOG_NEW_LABEL}|%{BACULA_LOG_WROTE_LABEL}|%{BACULA_LOG_NEW_MOUNT}|%{BACULA_LOG_NOOPEN}|%{BACULA_LOG_NOOPENDIR}|%{BACULA_LOG_NOSTAT}|%{BACULA_LOG_NOJOBS}|%{BACULA_LOG_ALL_RECORDS_PRUNED}|%{BACULA_LOG_BEGIN_PRUNE_JOBS}|%{BACULA_LOG_BEGIN_PRUNE_FILES}|%{BACULA_LOG_PRUNED_JOBS}|%{BACULA_LOG_PRUNED_FILES}|%{BACULA_LOG_ENDPRUNE}|%{BACULA_LOG_STARTJOB}|%{BACULA_LOG_STARTRESTORE}|%{BACULA_LOG_USEDEVICE}|%{BACULA_LOG_DIFF_FS}|%{BACULA_LOG_JOBEND}|%{BACULA_LOG_NOPRUNE_JOBS}|%{BACULA_LOG_NOPRUNE_FILES}|%{BACULA_LOG_VOLUME_PREVWRITTEN}|%{BACULA_LOG_READYAPPEND}|%{BACULA_LOG_CANCELLING}|%{BACULA_LOG_MARKCANCEL}|%{BACULA_LOG_CLIENT_RBJ}|%{BACULA_LOG_VSS}|%{BACULA_LOG_MAXSTART}|%{BACULA_LOG_DUPLICATE}|%{BACULA_LOG_NOJOBSTAT}|%{BACULA_LOG_FATAL_CONN}|%{BACULA_LOG_NO_CONNECT}|%{BACULA_LOG_NO_AUTH}|%{BACULA_LOG_NOSUIT}|%{BACULA_LOG_JOB}|%{BACULA_LOG_NOPRIOR})"}
[WARN ] 2021-02-12 20:17:10.208 [[lab-backbone-json-interfaces]-pipeline-manager] javapipeline - 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary

Those messages are all for other pipelines. Are there any other messages for sandbox-qa?
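To narrow the pod logs down to that one pipeline, something along these lines might help (the pod name is hypothetical; substitute your own):

```shell
# Show only log lines mentioning the failing pipeline,
# with a few lines of surrounding context
# ("logstash-0" is a placeholder pod name)
kubectl logs logstash-0 | grep -i -C 3 "sandbox-qa"
```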