Log correlation does not match trace.id from active span/transaction?

I have a use case where tasks (Java Function<T, R> or Supplier) are potentially queued. If no task is currently running, the current thread executes the task directly. If another task is already in progress, the new task is queued, and the next task to arrive executes the queued tasks along with its own. These tasks are created from RabbitMQ messages, but that does not really matter; the same thing could be implemented with HTTP endpoints (see the MWE at the end).
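Roughly, the queuing scheme looks like this (a simplified, illustrative sketch rather than our actual code; Runnable stands in for the Function/Supplier tasks):

import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the queuing scheme described above (illustrative; no error handling).
class InlineTaskRunner {
    private final List<Runnable> queued = new ArrayList<>();
    private boolean busy;

    void submit(final Runnable task) {
        synchronized (this) {
            if (busy) {
                // another task is currently running on some other thread: just queue and return
                queued.add(task);
                return;
            }
            busy = true;
        }
        try {
            // this thread first runs the tasks queued by earlier callers, then its own task,
            // all on whatever transaction is currently active on *this* thread
            final List<Runnable> drained;
            synchronized (this) {
                drained = new ArrayList<>(queued);
                queued.clear();
            }
            for (final Runnable earlier : drained) {
                earlier.run();
            }
            task.run();
        } finally {
            synchronized (this) {
                busy = false;
            }
        }
    }
}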

The problem is that log correlation seems to be broken (?), or at least does not support our use case. What we need is for logs to carry the original transaction.id and trace.id of the queued task, not the ids of the processing thread's transaction. Spans are correctly recorded and linked to the original transaction in the Kibana UI, but the log correlation fields do not match.

Kibana version: 7.17.0

Elasticsearch version: 7.17.0

APM Server version: 7.17.0

APM Agent language and version: 1.29.0

Browser version: Chrome 98

Original install method (e.g. download page, yum, deb, from source, etc.) and version: Maven Central

Fresh install or upgraded from other version? Upgraded

Is there anything special in your setup? No special setup, log correlation does not match for a simple MWE.

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):

Steps to reproduce:

  1. Queue task which activates a different transaction / activates a child span from a different transaction
  2. Write logs with enable_log_correlation=true inside the task
  3. transaction.id and trace.id in logs do not match the trace.id of the active span

Errors in browser console (if relevant):

Provide logs and/or server output (if relevant):

MWE:

The problem can be reproduced with a simple Spring Boot application (latest Spring Boot):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import co.elastic.apm.api.CaptureSpan;
import co.elastic.apm.api.ElasticApm;
import co.elastic.apm.api.Scope;
import co.elastic.apm.api.Span;
import co.elastic.apm.api.Transaction;
import co.elastic.apm.attach.ElasticApmAttacher;

@SpringBootApplication
@RestController
public class CorrelationApplication {
    private static final Logger LOG = LoggerFactory.getLogger(CorrelationApplication.class);

    private Transaction transaction; // cached by /queue so that /process can re-activate it later

    public static void main(final String[] args) {
        ElasticApmAttacher.attach();
        SpringApplication.run(CorrelationApplication.class, args);
    }

    @GetMapping("queue")
    public String queue() {
        LOG.info("START");
        transaction = ElasticApm.currentTransaction();
        log("start1", transaction, null);
        doWork("start1");

        return "started and cached tx " + f(transaction);
    }

    @GetMapping("process")
    public String process() throws Exception {
        LOG.info("PROCESS");
        if (transaction == null) {
            return "no qeueued transaction";
        }

        try (Scope activate = transaction.activate()) {
            LOG.info("TX current {}, processing {}", f(ElasticApm.currentTransaction()), f(transaction));
            final Span span = transaction.startSpan().setName("process");
            log("process1", transaction, span);
            doWork("process1");
            try (Scope unused = span.activate()) {
                LOG.info("SPAN current {}, processing {}", f(ElasticApm.currentSpan()), f(span));
                log("process2", transaction, span);
                doWork("process2");
            } finally {
                span.end();
            }
            log("process3", transaction, span);
            doWork("process3");
        }

        return "process cached tx " + f(transaction);
    }

    private void log(final String info, final Transaction tx, final Span span) {
        LOG.info("{} | current tx {} span {} | tx {} span {}", info, f(ElasticApm.currentTransaction()), f(ElasticApm.currentSpan()), f(tx), f(span));
    }

    private static String f(final Span span) {
        if (span == null) {
            return "(null)";
        }
        return span.getId() + '/' + span.getTraceId();
    }

    @CaptureSpan("inner span")
    private void doWork(final String name) {
        ElasticApm.currentSpan().setName(name);
        LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(100));
    }
}

Call with: curl localhost:8080/queue ; sleep 0.1 ; curl localhost:8080/process

Log output:

2022-03-07 14:08:09 [transaction.id=683f509c666ab749, trace.id=7ddca9e8ea4c65f85eb0cb0adddd6966] - START
2022-03-07 14:08:09 [transaction.id=683f509c666ab749, trace.id=7ddca9e8ea4c65f85eb0cb0adddd6966] - start1 | current tx 683f509c666ab749/7ddca9e8ea4c65f85eb0cb0adddd6966 span 683f509c666ab749/7ddca9e8ea4c65f85eb0cb0adddd6966 | tx 683f509c666ab749/7ddca9e8ea4c65f85eb0cb0adddd6966 span (null)
2022-03-07 14:08:10 [transaction.id=d214ef9b88093236, trace.id=fb6ad8d173d2b49d29db61aa38faa54a] - PROCESS
2022-03-07 14:08:10 [transaction.id=d214ef9b88093236, trace.id=fb6ad8d173d2b49d29db61aa38faa54a] - TX current d214ef9b88093236/fb6ad8d173d2b49d29db61aa38faa54a, processing 683f509c666ab749/7ddca9e8ea4c65f85eb0cb0adddd6966
2022-03-07 14:08:10 [transaction.id=d214ef9b88093236, trace.id=fb6ad8d173d2b49d29db61aa38faa54a] - process1 | current tx d214ef9b88093236/fb6ad8d173d2b49d29db61aa38faa54a span 683f509c666ab749/7ddca9e8ea4c65f85eb0cb0adddd6966 | tx 683f509c666ab749/7ddca9e8ea4c65f85eb0cb0adddd6966 span edda2140408eb496/7ddca9e8ea4c65f85eb0cb0adddd6966
2022-03-07 14:08:10 [transaction.id=d214ef9b88093236, trace.id=fb6ad8d173d2b49d29db61aa38faa54a] - SPAN current edda2140408eb496/7ddca9e8ea4c65f85eb0cb0adddd6966, processing edda2140408eb496/7ddca9e8ea4c65f85eb0cb0adddd6966
2022-03-07 14:08:10 [transaction.id=d214ef9b88093236, trace.id=fb6ad8d173d2b49d29db61aa38faa54a] - process2 | current tx d214ef9b88093236/fb6ad8d173d2b49d29db61aa38faa54a span edda2140408eb496/7ddca9e8ea4c65f85eb0cb0adddd6966 | tx 683f509c666ab749/7ddca9e8ea4c65f85eb0cb0adddd6966 span edda2140408eb496/7ddca9e8ea4c65f85eb0cb0adddd6966
2022-03-07 14:08:10 [transaction.id=d214ef9b88093236, trace.id=fb6ad8d173d2b49d29db61aa38faa54a] - process3 | current tx d214ef9b88093236/fb6ad8d173d2b49d29db61aa38faa54a span 683f509c666ab749/7ddca9e8ea4c65f85eb0cb0adddd6966 | tx 683f509c666ab749/7ddca9e8ea4c65f85eb0cb0adddd6966 span edda2140408eb496/7ddca9e8ea4c65f85eb0cb0adddd6966

I'd expect "process1" or at least "proces2" to write the same log correlation values as the first call of the queue() method, but the values are still the transaction and trace id from the second HTTP call.

application.properties:

logging.pattern.console= %d{yyyy-MM-dd HH:mm:ss} [%mdc] - %msg%n

elasticapm.properties:

enable_log_correlation=true

Related discussions:

Maybe this is not supported?

After discussion in Public API ignore transaction option · Issue #845 · elastic/apm-agent-java · GitHub, I've had a look at BlockingQueueContextPropagationTest.java, which handles the resumption of a "detached transaction" in a queue processor, but it does not mention anything about log correlation.
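For reference, the pattern demonstrated there boils down to something like the following (a sketch using only the public API; class and method names are mine, not taken from the test):

import co.elastic.apm.api.ElasticApm;
import co.elastic.apm.api.Scope;
import co.elastic.apm.api.Transaction;

// Sketch of the "detached transaction" idea: the producer starts a transaction but does not
// end it, and the consumer later activates it on its own thread, creates spans, then ends it.
class DetachedTransactionSketch {

    // producer side: start the transaction and hand it over together with the queued task
    Transaction startDetached() {
        return ElasticApm.startTransaction().setName("queued task");
    }

    // consumer side: resume the detached transaction on whichever thread picks up the task
    void resume(final Transaction detached, final Runnable task) {
        try (Scope scope = detached.activate()) {
            task.run();
        } finally {
            detached.end();
        }
    }
}

In our setup, though, the consuming thread already has its own agent-created transaction active, which is exactly where things diverge.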

Traces show correctly in APM UI, but being able to correlate logs from asynchronous tasks would be even more helpful for us.

Thank you for your help!

Hi @efdknittlfrank and welcome to our forum!

I am not sure about the original RabbitMQ issue, but at least the example you provided treats transactions in a way that doesn't really fit the data model's basic assumptions.
When you activate the saved (and already ended) transaction within process(), there is already an active transaction created by the built-in Spring Boot instrumentation. This is conceptually wrong, as there should be at most one active transaction on a thread. I believe that if you try this snapshot, you would get what you expect, but note that there may be other issues related to activating a transaction when there is already an active one.

After discussion in Public API ignore transaction option · Issue #845 · elastic/apm-agent-java · GitHub , I've had a look at BlockingQueueContextPropagationTest.java, which handles the resumption of a "detached transaction" in a queue processor, but it does not mention anything about log correlation.

Did you try it out? I really think that is what you should do. Log correlation should have worked out of the box if you activate/deactivate queue tasks properly, but maybe it didn't because of how log correlation was implemented before. If you did try it and still have a way to test, please try again with this snapshot of the newly implemented log correlation mechanism.

I hope this helps


Hello @Eyal_Koren

thanks for your reply and link to the snapshot.

I included resuming the ended transaction for completeness. Perhaps I should have left re-activating the transaction out of the MWE (but it does not affect the behavior). There is also a new span being created from the ended transaction. This span shows up correctly in the Kibana APM UI (under the ended transaction), but its logs correlate to the new transaction. It does not matter which of the two transactions is active; the logs of the span always correlate to the new transaction.

This can be seen from the following line in the log:

2022-03-07 14:08:10 [transaction.id=d214ef9b88093236, trace.id=fb6ad8d173d2b49d29db61aa38faa54a] - SPAN current edda2140408eb496/7ddca9e8ea4c65f85eb0cb0adddd6966, processing edda2140408eb496/7ddca9e8ea4c65f85eb0cb0adddd6966

The correlated trace.id in the MDC (i.e. fb6ad8d173d2b49d29db61aa38faa54a) does not match ElasticApm.currentSpan().getTraceId() (i.e. 7ddca9e8ea4c65f85eb0cb0adddd6966).

Unfortunately, I cannot "activate/deactivate" queue tasks properly. (Queued) tasks are always executed on an already-activated (and potentially different) transaction: if one task is in progress, other tasks are added to a "queue" (really just a simple List; there is no separate thread pool to which tasks are submitted). When the next task arrives and finds queued tasks, those queued tasks are executed before the new task, but all within the same transaction. It is a weird design, but that's how it is. Our current workaround is as follows (also explained in the GitHub issue; a rough sketch follows the list):

  1. Disable automatic instrumentation of rabbitmq code
  2. Manually create and start a transaction when the message is received
  3. Manually end the transaction before tasks are being queued (or processed)
  4. Create and start another transaction when the task is being processed
  5. Manually end the transaction when task processing for this single task has finished
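In code, our workaround looks roughly like this (a simplified sketch against the public API; names are illustrative, and the automatic RabbitMQ instrumentation is assumed to already be disabled via agent configuration, as per step 1):

import java.util.Queue;

import co.elastic.apm.api.ElasticApm;
import co.elastic.apm.api.Scope;
import co.elastic.apm.api.Transaction;

// Rough sketch of the manual-transaction workaround described in the steps above.
class ManualTransactionWorkaround {

    // steps 2 + 3: a transaction around receiving the message, ended before the task is queued
    void onMessage(final Runnable task, final Queue<Runnable> queue) {
        final Transaction receiveTx = ElasticApm.startTransaction().setName("receive message");
        try (Scope scope = receiveTx.activate()) {
            queue.add(task);
        } finally {
            receiveTx.end();
        }
    }

    // steps 4 + 5: a fresh transaction per task while it is actually processed
    void processOne(final Runnable task) {
        final Transaction taskTx = ElasticApm.startTransaction().setName("process task");
        try (Scope scope = taskTx.activate()) {
            task.run();
        } finally {
            taskTx.end();
        }
    }
}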

This means that other rabbitmq code cannot use the agent's instrumentation either. Possible (naive) solutions for our use case:

  1. Correlate logs with active span instead of active transaction
  2. Being able to "overwrite" the active transaction
  3. Being able to "ignore" default instrumentations in some code paths
  4. Being able to prematurely "end" an agent transaction manually

I'm mostly interested in having the correct trace.id value in the log correlation: filtering logs with the trace.id of a Kibana APM UI view should give me the logs that correlate to the spans visible in that trace. Currently, only the logs of the original spans show up; "detached/alien" spans carry the "wrong" trace.id value (i.e. that of the active transaction, not of their parent transaction).

TBH I'm not even sure what value transaction.id should have in that case: the id of the active transaction or the id of the parent transaction?

I will try the provided snapshot and see if the behavior is different and matches my use case better. I will then provide additional details.

Hopefully at least some of what I wrote makes sense :slight_smile:

Thanks again

Hi @Eyal_Koren

I have now tested with the provided snapshot, but unfortunately the behavior is identical.

2022-03-09 11:44:39,548 [Attach Listener] INFO  co.elastic.apm.agent.configuration.StartupInfo - Starting Elastic APM 1.29.1-SNAPSHOT.UNKNOWN as performance-demo on Java 11.0.11 Runtime version: 11.0.11+9 VM version: 11.0.11+9 (AdoptOpenJDK) Windows 10 10.0

I have rewritten the MWE so that it no longer activates the ended transaction and only starts a new span inside the task. It is now also closer to our production code (behavior is identical with both agent jars).

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;
import java.util.function.Supplier;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import co.elastic.apm.api.CaptureSpan;
import co.elastic.apm.api.ElasticApm;
import co.elastic.apm.api.Scope;
import co.elastic.apm.api.Span;
import co.elastic.apm.api.Transaction;
import co.elastic.apm.attach.ElasticApmAttacher;

@SpringBootApplication
@RestController
public class CorrelationApplication2 {
    private static final Logger LOG = LoggerFactory.getLogger(CorrelationApplication2.class);

    private Supplier<String> task;

    public static void main(final String[] args) {
        ElasticApmAttacher.attach();
        SpringApplication.run(CorrelationApplication2.class, args);
    }

    @GetMapping("queue")
    public String queue() {
        LOG.info("QUEUE");
        final var transaction = ElasticApm.currentTransaction();
        log("queue1", transaction, null);

        // queue task
        task = () -> {
            final Span span = transaction.startSpan().setName("task");
            log("task1", transaction, span);
            doWork("task1");
            try (Scope unused = span.activate()) {
                LOG.info("SPAN current {}, processing {}", f(ElasticApm.currentSpan()), f(span));
                log("task2", transaction, span);
                doWork("task2");
            } finally {
                span.end();
            }
            log("task3", transaction, span);
            doWork("task3");
            return span.getTraceId();
        };

        LOG.info("queued and cached task from tx {}", f(transaction));
        return "queued and cached task from tx " + f(transaction);
    }

    @GetMapping("process")
    public String process() {
        LOG.info("PROCESS");
        if (task == null) {
            return "no queued task";
        }

        // execute queued task
        final String queuedTraceId = task.get();

        LOG.info("processed queued task from trace id {}", queuedTraceId);
        return "process queued task " + queuedTraceId;
    }

    private void log(final String info, final Transaction tx, final Span span) {
        LOG.info(
                "{} | current tx {} span {} | tx {} span {}",
                info,
                f(ElasticApm.currentTransaction()),
                f(ElasticApm.currentSpan()),
                f(tx),
                f(span));
    }

    private static String f(final Span span) {
        if (span == null) {
            return "(null)";
        }
        return span.getId() + '/' + span.getTraceId();
    }

    @CaptureSpan("inner span")
    private void doWork(final String name) {
        ElasticApm.currentSpan().setName(name);
        LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(100));
    }
}

Traces are still correctly linked to their parent (i.e. non-active) transaction in the APM UI.

However, the log correlation shows the logs of the spans ("task" and "task2") with a different trace.id:

2022-03-09 11:44:53 [transaction.id=20e20898f03b793c, trace.id=7a9f1300ad18e1eb208f195c161df9a0] - QUEUE
2022-03-09 11:44:53 [transaction.id=20e20898f03b793c, trace.id=7a9f1300ad18e1eb208f195c161df9a0] - queue1 | current tx 20e20898f03b793c/7a9f1300ad18e1eb208f195c161df9a0 span 20e20898f03b793c/7a9f1300ad18e1eb208f195c161df9a0 | tx 20e20898f03b793c/7a9f1300ad18e1eb208f195c161df9a0 span (null)
2022-03-09 11:44:53 [transaction.id=20e20898f03b793c, trace.id=7a9f1300ad18e1eb208f195c161df9a0] - queued and cached task from tx 20e20898f03b793c/7a9f1300ad18e1eb208f195c161df9a0
2022-03-09 11:44:54 [transaction.id=5e54145c5e517dd0, trace.id=473134d3265dfff20114e2452582c75a] - PROCESS
2022-03-09 11:44:54 [transaction.id=5e54145c5e517dd0, trace.id=473134d3265dfff20114e2452582c75a] - task1 | current tx 5e54145c5e517dd0/473134d3265dfff20114e2452582c75a span 5e54145c5e517dd0/473134d3265dfff20114e2452582c75a | tx 20e20898f03b793c/7a9f1300ad18e1eb208f195c161df9a0 span 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0
2022-03-09 11:44:54 [transaction.id=5e54145c5e517dd0, trace.id=473134d3265dfff20114e2452582c75a] - SPAN current 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0, processing 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0
2022-03-09 11:44:54 [transaction.id=5e54145c5e517dd0, trace.id=473134d3265dfff20114e2452582c75a] - task2 | current tx 5e54145c5e517dd0/473134d3265dfff20114e2452582c75a span 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0 | tx 20e20898f03b793c/7a9f1300ad18e1eb208f195c161df9a0 span 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0
2022-03-09 11:44:54 [transaction.id=5e54145c5e517dd0, trace.id=473134d3265dfff20114e2452582c75a] - task3 | current tx 5e54145c5e517dd0/473134d3265dfff20114e2452582c75a span 5e54145c5e517dd0/473134d3265dfff20114e2452582c75a | tx 20e20898f03b793c/7a9f1300ad18e1eb208f195c161df9a0 span 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0
2022-03-09 11:44:54 [transaction.id=5e54145c5e517dd0, trace.id=473134d3265dfff20114e2452582c75a] - processed queued task from trace id 7a9f1300ad18e1eb208f195c161df9a0

Note the lines:

...
2022-03-09 11:44:54 [transaction.id=5e54145c5e517dd0, trace.id=473134d3265dfff20114e2452582c75a] - SPAN current 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0, processing 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0
2022-03-09 11:44:54 [transaction.id=5e54145c5e517dd0, trace.id=473134d3265dfff20114e2452582c75a] - task2 | current tx 5e54145c5e517dd0/473134d3265dfff20114e2452582c75a span 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0 | tx 20e20898f03b793c/7a9f1300ad18e1eb208f195c161df9a0 span 5ae776ff7efcaef5/7a9f1300ad18e1eb208f195c161df9a0
...

The trace.id of the active span is 7a9f1300ad18e1eb208f195c161df9a0 (which can be verified in the APM UI and by looking at the document in the apm-*-span-* index), yet the MDC trace.id in the log is 473134d3265dfff20114e2452582c75a. I'd expect these IDs to match; anything outside of this span can have different trace/transaction ids, I don't care about those. Maybe my expectation is nonsensical and my premise is flawed? Perhaps this isn't supported and shouldn't be supported?

One question I've been asking myself is: Should logs be correlated based on the active transaction or the active span?

Thanks
Daniel

The problem in both implementations of log correlation is that we rely on Tracer.currentTransaction to add data to the MDC. That implementation looks at the bottom of the activation stack, where it finds the initially activated transaction.
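Roughly, in terms of the public API, the effect is something like this (an illustration only, not the agent's actual code; the class here is made up for the example):

import org.slf4j.MDC;

import co.elastic.apm.api.ElasticApm;

// Illustration only: where the correlation ids effectively come from.
class CorrelationSourceSketch {

    // current behavior: ids follow the current *transaction*, i.e. the bottom of the
    // activation stack (in your example, the Spring Boot HTTP transaction of /process)
    static void fromCurrentTransaction() {
        MDC.put("transaction.id", ElasticApm.currentTransaction().getId());
        MDC.put("trace.id", ElasticApm.currentTransaction().getTraceId());
    }

    // what you are asking for: ids following the innermost active *span* instead
    static void fromCurrentSpan() {
        MDC.put("trace.id", ElasticApm.currentSpan().getTraceId());
    }
}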

One question I've been asking myself is: Should logs be correlated based on the active transaction or the active span?

Excellent question that encapsulates the issue - we do not expect a scenario where a span/transaction gets activated when there's an unrelated active transaction on the thread. So relying on the trace.id and transaction.id of the active span should be the same as relying on the active transaction.

So the right fix is not to avoid activating a transaction that needs to be activated, but the opposite: you should deactivate the current transaction before activating the one you want to continue.
A clarification on that: it's fine to activate an ended transaction and create spans for it; as you already see, it simply shows up as asynchronous in the waterfall view.

Overall, even if the above fixes your issue, I would go through our data model and the concept of activation/deactivation again and review whether there is a better way to express your app design through them. Maybe there is no better way, but it's worth rethinking. You may find a way to accommodate everything more neatly and even leverage the automatic RabbitMQ instrumentation.

Thanks for your answer @Eyal_Koren

So the right fix is not avoid activating a transaction that needs to be activated, but it is the opposite - you should deactivate the transaction before activating the one you want to continue.

I don't understand this point. I cannot deactivate the transaction, because it was created automatically by the agent. ElasticApm.currentTransaction().end() does not work and will log a warning when the agent eventually tries to end the transaction. This would basically require one of my proposed solutions from an earlier post:

This means that other rabbitmq code cannot use the agent's instrumentation either. Possible (naive) solutions for our use case:

  1. Correlate logs with active span instead of active transaction
  2. Being able to "overwrite" the active transaction
  3. Being able to "ignore" default instrumentations in some code paths
  4. Being able to prematurely "end" an agent transaction manually

But maybe I'm misunderstanding. How would I go about disabling/ending a transaction that my code didn't create in the first place, but was created and activated by automatic instrumentation? Basically, the "task" would need to be "detached" from the current/active transaction (either by disabling/ending it or overwriting it with a different transaction).

I'm aware of the data model and (I think) I generally understand how spans and transactions are activated and deactivated (via AutoCloseable Scope instances). But I'm not sure how this is going to help me here.

My problem could also be restated as: "I'm writing logs inside an active span, but the logs have a different trace.id. In other words: Logs do not match the waterfall diagram."

Generally, the agent does the right thing™ 99% of the time and I'm leveraging it to great effect. It's just this very specific implementation where it is impossible to tell the agent how to correlate logs properly. But even here, we have already implemented a (complicated, dirty) workaround: disable automatic instrumentation, start and end a tx before queuing, start and end a tx inside the task. The problem is that there is already a transaction (created by agent instrumentation) and I cannot deactivate it.

I guess for now I have to stick with our workaround. Eventually, our application could be rewritten to better accommodate the agent's default instrumentation (e.g. separate executors/thread pools), but at the moment that's not planned. I understand that the agent has different goals than our application and must work for the majority of cases, even if that means some cases cannot be supported.

Your help is much appreciated, thanks.

You are right, my bad, the API doesn't allow that and doesn't expose the active scope if you didn't do the activation.
As for your proposals:

  1. Correlate logs with active span instead of active transaction

As I said, we can do that, but we never expect them to be different. Let me think about that. It may make sense to do it without losing anything on the global agent view while still accommodating your special use case.

  1. Being able to "overwrite" the active transaction

What does that mean? What happens with the one that was active before?

  1. Being able to "ignore" default instrumentations in some code paths

We have that in some cases. In messaging there is ignore_message_queues but it probably won't answer your need.
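For completeness, it takes a comma-separated list of wildcard patterns, e.g. in elasticapm.properties (the queue name here is purely illustrative):

ignore_message_queues=our-task-queue-*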

  1. Being able to prematurely "end" an agent transaction manually

This won't do it. What you need is a way to deactivate manually and then reactivate. We currently don't plan to add that, but we may consider it in the future.

Sorry, I didn't mean to suggest you don't; you seem very knowledgeable about the matter, but I think that rethinking the approach could lead to better results. It may also not be possible, though.

I'll get back to you with an answer about using current span instead of transaction


You are right, my bad, the API doesn't allow that and doesn't expose the active scope if you didn't do the activation.

Okay good, I thought I overlooked something from the API docs :slight_smile:

  1. Being able to "overwrite" the active transaction

What does that mean? What happens with the one that was active before?

This was badly expressed. What I meant was that it would be possible to activate a transaction while another transaction is already active. The new transaction would be "active" until it gets deactivated again; then the original transaction becomes the active one again. In other words, a "stack of transactions". IIRC this was already discussed somewhere in the past, but it doesn't really fit the agent implementation/view.

It could also be seen as similar to your answer regarding point 4 (i.e. manually ending an agent transaction): deactivate, then reactivate. The observable effect for the end user would be the same/similar: a different transaction becomes temporarily active and the initial transaction is "suspended" until the temporary transaction is deactivated again.

Sorry, I didn't mean to suggest you don't,

No offense taken, don't worry :slight_smile:

  1. Correlate logs with active span instead of active transaction

As I said, we can do that, but we never expect them to be different. Let me think about that. It may make sense to do that without losing anything on a global agent view and still accommodate for your special use case.

[…]

I'll get back to you with an answer about using current span instead of transaction

Thanks a lot. Looking forward to the answer, even if it turns out that this is not going to be supported by the agent.

Will that work for the rabbitmq instrumentation too? It might be a viable workaround for us, as our queues follow a naming pattern. I will look into that, thanks!

The RabbitMQ instrumentation should respect this configuration


@efdknittlfrank please try this snapshot - it uses the active span, rather than relying on the internal currentTransaction API. I hope it resolves this issue for you


YES! This snapshot now produces MDC values that match my expectation. Anything logged within an activated span takes the trace.id (and transaction.id) of that span. Once the span is deactivated, the original trace and transaction ids are logged again (as expected).

2022-03-10 08:56:24 [transaction.id=6cc089a16b42c9ce, trace.id=d27b20bc07aaf6920c174f9d79e8bdb6] - QUEUE
2022-03-10 08:56:24 [transaction.id=6cc089a16b42c9ce, trace.id=d27b20bc07aaf6920c174f9d79e8bdb6] - queue1 | current tx 6cc089a16b42c9ce/d27b20bc07aaf6920c174f9d79e8bdb6 span 6cc089a16b42c9ce/d27b20bc07aaf6920c174f9d79e8bdb6 | tx 6cc089a16b42c9ce/d27b20bc07aaf6920c174f9d79e8bdb6 span (null)
2022-03-10 08:56:24 [transaction.id=6cc089a16b42c9ce, trace.id=d27b20bc07aaf6920c174f9d79e8bdb6] - queued and cached task from tx 6cc089a16b42c9ce/d27b20bc07aaf6920c174f9d79e8bdb6
2022-03-10 08:56:24 [transaction.id=98ac7ec594b28fef, trace.id=672e1e0475b05b3cd9d431ba3846b03e] - PROCESS
2022-03-10 08:56:24 [transaction.id=98ac7ec594b28fef, trace.id=672e1e0475b05b3cd9d431ba3846b03e] - task1 | current tx 98ac7ec594b28fef/672e1e0475b05b3cd9d431ba3846b03e span 98ac7ec594b28fef/672e1e0475b05b3cd9d431ba3846b03e | tx 6cc089a16b42c9ce/d27b20bc07aaf6920c174f9d79e8bdb6 span 1f99323f4db1e8e6/d27b20bc07aaf6920c174f9d79e8bdb6
2022-03-10 08:56:24 [transaction.id=6cc089a16b42c9ce, trace.id=d27b20bc07aaf6920c174f9d79e8bdb6] - SPAN current 1f99323f4db1e8e6/d27b20bc07aaf6920c174f9d79e8bdb6, processing 1f99323f4db1e8e6/d27b20bc07aaf6920c174f9d79e8bdb6
2022-03-10 08:56:24 [transaction.id=6cc089a16b42c9ce, trace.id=d27b20bc07aaf6920c174f9d79e8bdb6] - task2 | current tx 98ac7ec594b28fef/672e1e0475b05b3cd9d431ba3846b03e span 1f99323f4db1e8e6/d27b20bc07aaf6920c174f9d79e8bdb6 | tx 6cc089a16b42c9ce/d27b20bc07aaf6920c174f9d79e8bdb6 span 1f99323f4db1e8e6/d27b20bc07aaf6920c174f9d79e8bdb6
2022-03-10 08:56:24 [transaction.id=98ac7ec594b28fef, trace.id=672e1e0475b05b3cd9d431ba3846b03e] - task3 | current tx 98ac7ec594b28fef/672e1e0475b05b3cd9d431ba3846b03e span 98ac7ec594b28fef/672e1e0475b05b3cd9d431ba3846b03e | tx 6cc089a16b42c9ce/d27b20bc07aaf6920c174f9d79e8bdb6 span 1f99323f4db1e8e6/d27b20bc07aaf6920c174f9d79e8bdb6
2022-03-10 08:56:25 [transaction.id=98ac7ec594b28fef, trace.id=672e1e0475b05b3cd9d431ba3846b03e] - processed queued task from trace id d27b20bc07aaf6920c174f9d79e8bdb6

At least in the MWE demo application, this solves the issue 100%. I don't expect a different behavior with the real application.

I hope this change does not introduce other unexpected behavior in the agent.


That's very pleasing to hear! :tada:
I hope it fully resolves the issue in your real app as well.
