Not an easy problem. I do not think an aggregate filter using the queue id as task_id can do it, because the two messages have different queue ids and so would never land in the same aggregate map. You may be able to do it in a ruby filter, or in an aggregate filter with a constant task_id, which is basically a ruby filter with some framework (timeouts) around it that may be useful.
The first message processed contains the queue id of the second message processed.
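If you do try the aggregate route, a minimal skeleton might look like this. As far as I recall the task_id option has to contain a sprintf expression, so the usual trick is to point it at a field with a constant value; everything inside code is still hand-written join logic:

```
filter {
  mutate { add_field => { "[@metadata][task]" => "join" } }  # constant value
  aggregate {
    task_id => "%{[@metadata][task]}"  # evaluates the same for every event, so there is one shared map
    code => '
      map["pending"] ||= {}
      # the actual join logic goes here, much like the ruby filter sketch below
    '
    timeout => 600  # note this expires the whole shared map at once, not individual pairs
  }
}
```

Remember the aggregate filter only works correctly with pipeline.workers set to 1, and that same restriction is what makes the shared-state ruby filter below safe.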
One approach would be to store the aggregated data for AAAAAA1 in the filter (maybe in a hash of hashes with BBBBBB2 as the key). Then when BBBBBB2 is processed, check if you have a hash entry for it. If so, join the two events and remove the entry.
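A minimal sketch of that as a ruby filter. The field names (queue_id for a message's own id, next_queue_id for the id it hands off to) are assumptions about what your earlier grok stage extracts:

```
filter {
  ruby {
    init => '@pending = {}'  # hash of hashes, keyed by the queue id of the follow-up message
    code => '
      own_id  = event.get("queue_id")
      next_id = event.get("next_queue_id")
      if next_id
        # First message: stash its fields under the queue id of the second message.
        @pending[next_id] = event.to_hash
        event.cancel
      elsif own_id && @pending.key?(own_id)
        # Second message: join with the stashed entry and delete it from the hash.
        @pending.delete(own_id).each do |k, v|
          event.set("joined_#{k}", v) unless k.start_with?("@")
        end
      end
    '
  }
}
```

Note the first event is cancelled here, so if its partner never arrives its data is lost and the hash grows forever; in practice you would want to age entries out, which is exactly the timeout framework the aggregate filter gives you.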
Another would be to use a staging index. Store a document for every queue id. Then scan it periodically to see whether you can match up pairs (or triplets), join them, write the result to its final resting place, and delete the staging documents. This is probably more robust.
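A rough sketch of the periodic join job, here as a plain Ruby script using the elasticsearch gem. The index names (staging, mail-events), the field names, and the one-directional pairing rule are all assumptions:

```
require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200')

# Fetch staged first-half documents, i.e. those that name their partner's queue id.
staged = client.search(
  index: 'staging',
  body: { query: { exists: { field: 'next_queue_id' } }, size: 1000 }
)

staged['hits']['hits'].each do |first|
  partner_id = first['_source']['next_queue_id']
  # Depending on your mapping you may need to query queue_id.keyword instead.
  match = client.search(
    index: 'staging',
    body: { query: { term: { 'queue_id' => partner_id } }, size: 1 }
  )
  hit = match['hits']['hits'].first
  next unless hit  # partner not indexed yet, try again on the next pass

  # Join the pair, write it to the final resting place, then clean up staging.
  client.index(index: 'mail-events', body: first['_source'].merge(hit['_source']))
  client.delete(index: 'staging', id: first['_id'])
  client.delete(index: 'staging', id: hit['_id'])
end
```

Run something like that from cron every minute or two. Because unmatched documents just sit in staging until their partner shows up, this sidesteps the timing problem described below.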
Note that writes to Elasticsearch are not immediate. If you process AAAAAA1 and then quickly try to do a lookup when processing BBBBBB2, there are several reasons why AAAAAA1 may not have been indexed yet (pipeline batching, output delays, index refresh delays, and probably more; by default a document does not become searchable until the next index refresh, which can be up to a second after it is written).