Validation of APM-approach across transactions and spans

Would like to validate my approach for APM.

We have a web-server application.
Inside that application we can identify the message being sent to a JMS queue.
It is possible to identify:

A. the application sending the message to the JMS queue.

B. a unique message identifier for this message.
We are able to assign this identifier to a Transaction Name inside the application.

C. The application consuming the message from the JMS queue.
This application is running in a different process, a different Java VM.
We are able to capture the same message identifier and ensure it continues to be processed inside APM under the same Transaction Name.

D. Most of the processing follows a request/reply pattern towards/from JMS.
Under all circumstances we are able to track the message ID and assign it to the Transaction Name.

What we would like to do is the following:

A. Identify the transaction context (with the message ID as the Transaction Name).
B. Define a new span for every step in the processing, with identification and an associated measurement.
C. In Kibana, group the APM spans under a single Transaction Name.

Is this a valid approach?

The thing that worries me is that initial tests indicate a new transaction ID is generated every time a transaction is opened,
whereas I would prefer that transactions with the same name share the same transaction ID.
When I execute this code twice with the same value of TransactionName:

    Transaction transaction = ElasticApm.startTransaction();
    transaction.setName(TransactionName);
    String ID = transaction.getId();
    transaction.setType(Transaction.TYPE_REQUEST);
    System.out.println(ID);
    System.out.println(TransactionName);
    System.out.println(SpanID);

I get a different ID, this is not what I prefer.

8f6f4f3b849ece99
XXXTransName (this context name is known across my applications)
SSSSpanName

34658f65d0d65328 (different ID)
XXXTransName (this context name is kept the same)
SSSSpanName

Hi and thanks for giving our APM solution a try.

Your JMS approach is valid, and we are soon to merge a PR that will provide you with a powerful way of extracting a context String on the sender side and using it to create a span that uses the same trace context on the receiver side. You can already use our OpenTracing bridge for that purpose with the Inject and Extract APIs, but see this caveat in this regard. In either case, the idea is that you don't need to deal with IDs yourself; let the agent handle the context maintenance.
I think you can use a JMS message property to move the String between sender and receiver.
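To make the message-property idea concrete, here is a minimal sketch. The FakeMessage class is a stand-in for javax.jms.Message (real code would call the same setStringProperty/getStringProperty methods on the actual JMS message); the property name and the traceparent-style value are illustrative assumptions, not the agent's actual wire format:

```java
import java.util.HashMap;
import java.util.Map;

public class TraceContextPropagation {

    // Stand-in for javax.jms.Message; only the two property methods we need.
    static class FakeMessage {
        private final Map<String, String> properties = new HashMap<>();
        void setStringProperty(String name, String value) { properties.put(name, value); }
        String getStringProperty(String name) { return properties.get(name); }
    }

    // JMS property names must be valid Java identifiers, so underscores, not hyphens.
    static final String TRACE_HEADER = "elastic_apm_traceparent";

    // Producer side: attach the serialized trace context to the outgoing message.
    static void injectContext(FakeMessage message, String traceContext) {
        message.setStringProperty(TRACE_HEADER, traceContext);
    }

    // Consumer side: read the context back so the receiver can continue the same trace.
    static String extractContext(FakeMessage message) {
        return message.getStringProperty(TRACE_HEADER);
    }

    public static void main(String[] args) {
        FakeMessage msg = new FakeMessage();
        injectContext(msg, "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01");
        System.out.println(extractContext(msg));
    }
}
```

The point is only the shape of the flow: the producer writes one opaque String onto the message, and the consumer hands that String back to the agent instead of managing any IDs itself.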

As for the Transaction ID, I am not sure I understood what you explained. We manage the IDs in a way that enables us to track all the links between all trace parts. In addition, we have the transaction name, which is what we later use for aggregating the data in the UI, so eventually you will see all instances of request handling (each with its unique ID) aggregated based on the transaction name.

I hope it makes sense. Good luck.

It is good to hear we are thinking within similar patterns.
If the reports get assembled based on the "transaction name" and not on the "internal transaction identifier" we may be in good shape.

The ambiguity I try to understand is the difference between this call:
Transaction transaction = ElasticApm.startTransaction();
And this other one:
Transaction transaction = ElasticApm.currentTransaction();

In the second case I should be able to set the context and obtain the same identifier, and in that way "tell" Kibana that this is the same context.

The opentracing bridge is just another layer of complexity,
so for maintenance reasons I would prefer to skip that.

> The ambiguity I try to understand is the difference between this call:
> Transaction transaction = ElasticApm.startTransaction();
> And this other one:
> Transaction transaction = ElasticApm.currentTransaction();

Not sure how this is ambiguous - startTransaction creates the transaction when a JVM starts handling a request or receives a JMS message. currentTransaction returns the active transaction anywhere else in your code, as long as you have activated it. Please read through the public API documentation; it is important that you fully understand it so that you get what you want.
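To illustrate the distinction, here is a toy simulation of the described semantics (these are not the real co.elastic.apm classes, just a sketch: startTransaction always creates a fresh object with a new ID, while currentTransaction returns whatever transaction was activated on the current thread):

```java
import java.util.UUID;

public class TransactionDemo {

    static class Transaction implements AutoCloseable {
        final String id = UUID.randomUUID().toString(); // fresh ID on every creation
        String name;
        void setName(String name) { this.name = name; }
        // Bind this transaction to the current thread (the real API returns a Scope).
        Transaction activate() { ACTIVE.set(this); return this; }
        @Override public void close() { ACTIVE.remove(); }
    }

    static final ThreadLocal<Transaction> ACTIVE = new ThreadLocal<>();

    // Analogous to ElasticApm.startTransaction(): always a new transaction.
    static Transaction startTransaction() { return new Transaction(); }

    // Analogous to ElasticApm.currentTransaction(): the activated one, if any.
    static Transaction currentTransaction() { return ACTIVE.get(); }

    public static void main(String[] args) {
        Transaction tx = startTransaction();
        tx.setName("XXXTransName");
        try (Transaction scope = tx.activate()) {
            // Anywhere on this thread while activated: the same object, same ID.
            System.out.println(currentTransaction() == tx);
        }
    }
}
```

So setting the same name never makes startTransaction return the same ID; only activation makes currentTransaction hand back the already-running transaction.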

> The opentracing bridge is just another layer of complexity,
> so for maintenance reasons I would prefer to skip that.

Then please wait a little longer to get the related extension of the APIs and use them.

The main issue I try to understand is we have one process to start a transaction, inside one Java-VM. Another process will continue with the same transaction, but in another Java-VM.

Have already turned the docs inside/out, but I will have another read of the link you gave me.

Which kind of timeline are we looking at for this new release,
is there a beta ?

I can't guarantee on the timeline, but should be fairly soon, we already work on the code for that.
I suggest you read the OpenTracing Inject and Extract API link I shared so you have an idea of how roughly it would work.

We have a similar requirement to propagate the trace ID across services, which includes JMS on our end. We modified one of the available plugins (Spring RestTemplate) so that on the consumer side we can read the value of a custom JMS header into which we injected the original trace ID on the producer side.

It allows us to associate the trace from producer to consumer, but we still have problems with timing values; we are not sure if this is because we have not anticipated certain issues, like the transaction being closed on the producer side. This results in spans that account for more than 100% of a trace. Not ideal, but at least we can still see timing values. Also, our load-test numbers jumped by more than 10% as a result of using the modified agent.

We are also not sure whether future updates will break our modifications, so a better way to propagate the trace ID or transaction ID (maybe something like ElasticApm.setTransactionID() or similar) would be greatly appreciated.

Of course, native JMS support is still the preferred way for us; we see that a similar open APM initiative already has it in their agent, though they use a whole different stack. We still prefer Elastic, as we already have engineering experience with it and are not keen to introduce more tech stacks at the moment.

@digitalron what you did is basically the way to go, but with the new API you should get better results and fewer implementation dependencies, so I suggest you switch to that once it is available. For example, a Transaction represents request handling on a specific process/JVM, so keeping the transaction and starting a span on the receiver side is not perfect. Our API will extract the trace context on the sender side and, once used on the receiver side, will start a new transaction while keeping the trace context. Still, things may look weird in terms of timings in the trace context; we'll see once we get that running and will address issues as we encounter them.

As for automatic JMS support - unfortunately, due to prioritization, it is not on the immediate roadmap, but we have it planned in the long run, and getting your feedback is important in this regard.

Thank you for that information Eyal.

So is it correct to say that, from a terminology point of view, a transaction is scoped to a single process or JVM instance, while a trace can cover multiple transactions?

We got used to AppDynamics, which refers to an end-to-end flow as a "business transaction", even across services.

Regards,

Ronald

Ronald,

Yes, this is correct: in terms of the agent API terminology, a Transaction records a request-handling event within a service and can have zero or more child spans. A cross-service trace will yield multiple Transactions, all sharing the same trace ID.
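A toy illustration of that terminology: one trace ID shared by several transactions, each with its own transaction ID (random UUIDs here stand in for the real W3C trace-context format):

```java
import java.util.UUID;

public class TraceTerminology {

    static class Transaction {
        final String traceId;                                        // shared across services
        final String transactionId = UUID.randomUUID().toString();   // unique per transaction
        final String name;                                           // used for aggregation in the UI
        Transaction(String traceId, String name) {
            this.traceId = traceId;
            this.name = name;
        }
    }

    public static void main(String[] args) {
        String traceId = UUID.randomUUID().toString();  // created by the first service
        Transaction producer = new Transaction(traceId, "XXXTransName"); // web-server JVM
        Transaction consumer = new Transaction(traceId, "XXXTransName"); // JMS consumer JVM

        System.out.println(producer.traceId.equals(consumer.traceId));             // true
        System.out.println(producer.transactionId.equals(consumer.transactionId)); // false
    }
}
```

This matches the AppDynamics "business transaction" idea: the end-to-end unit is the trace, while each service contributes its own transaction to it.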

I hope this makes sense.
Eyal.


To demonstrate my approach to various coders I am writing four simple programs:

  1. StartAPMTransaction.java (TransactionName)
  2. StartAPMSpan.java (TransactionName, SpanName)
  3. EndAPMSpan.java (TransactionName, SpanName)
  4. EndAPMTransaction.java (TransactionName)

Each of these takes the above command-line arguments as input, which should allow me to demonstrate the APM functionality.
What is the best "getting started" guide for implementing this with the OpenTracing bridge?

I assume it would be best to start with the OpenTracing guide and then go to our bridge documentation.
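For reference, the bridge setup in Maven terms is roughly the following (artifact coordinates as documented for the Elastic APM OpenTracing bridge; the version numbers shown are examples, so check the current releases before using):

```xml
<!-- Elastic APM OpenTracing bridge; delegates OpenTracing calls to the agent. -->
<dependency>
    <groupId>co.elastic.apm</groupId>
    <artifactId>apm-opentracing</artifactId>
    <version>1.3.0</version>
</dependency>
<!-- The OpenTracing API itself. -->
<dependency>
    <groupId>io.opentracing</groupId>
    <artifactId>opentracing-api</artifactId>
    <version>0.31.0</version>
</dependency>
```

The agent itself is still attached to the JVM as usual; the bridge only provides an OpenTracing-compatible Tracer on top of it.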

As I was trying to find an OpenTracing sample for injecting the context, I came across this page:

https://www.programcreek.com/java-api-examples/?api=io.opentracing.SpanContext

Example 6 is a typical JMS implementation for injecting the context:

    /**
     * Build span and inject. Should be used by producers.
     * @param message JMS message
     * @return span
     */
    public static Span buildAndInjectSpan(Destination destination, final Message message,
        Tracer tracer) {
      Tracer.SpanBuilder spanBuilder = tracer.buildSpan(TracingMessageUtils.OPERATION_NAME_SEND)
          .ignoreActiveSpan()
          .withTag(Tags.SPAN_KIND.getKey(), Tags.SPAN_KIND_PRODUCER);
      SpanContext parent = TracingMessageUtils.extract(message, tracer);

      if (parent != null) {
        spanBuilder.asChildOf(parent);
      }

      Span span = spanBuilder.start();

      SpanJmsDecorator.onRequest(destination, span);

      TracingMessageUtils.inject(span, message, tracer);
      return span;
    }

Is there any reason why this would fail to integrate?

Regards

This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.