Monitoring Undertow with the Elastic APM Java agent is not giving any transaction data

Kibana version: 7.7.0

Elasticsearch version: 7.7.0

APM Server version: apm-server-7.7.0-x86_64.rpm

APM Agent language and version: elastic-apm-agent-1.16.0.jar

Browser version: Chrome 84.0

Original install method (e.g. download page, yum, deb, from source, etc.) and version: RPM from download page

Fresh install or upgraded from other version? Fresh install

Is there anything special in your setup? For example, are you using the Logstash or Kafka outputs? Are you using a load balancer in front of the APM Servers? Have you changed index pattern, generated custom templates, changed agent configuration etc.:
No, nothing. The agent is installed on the server where the ESB pod (built on Undertow) is running.

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):

We want to get transaction data for our ESB pods, which are built on Undertow, and the supported-technologies page lists Undertow as supported. Still, we are not receiving any transactions.
We can see errors and JVM metrics in Kibana, but the actual transactions, which do appear in the pod logs, are missing.

**Changes made in the yaml file are as below**
- name: JAVA_OPTS
  value: " -Xms500M -Xmx1024M -verbose:gc -Xloggc:/data/fuse/jboss/data/log/gc.log -XX:AdaptiveSizePolicyWeight=90 -XX:CICompilerCount=2 -XX:CompressedClassSpaceSize=260046848 -XX:GCLogFileSize=1M -XX:GCTimeRatio=4 -XX:MaxHeapFreeRatio=20 -XX:MaxMetaspaceSize=800M -XX:MetaspaceSize=400M -XX:MinHeapFreeRatio=10 -XX:NumberOfGCLogFiles=5 -XX:ParallelGCThreads=2 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:-TraceClassUnloading -XX:+UnlockExperimentalVMOptions -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseGCLogFileRotation -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+HeapDumpOnOutOfMemoryError -Delastic.apm.server_urls= -Delastic.apm.service_name=ESB_Test3_Account -Delastic.apm.hostname=ESB_Account -Delastic.apm.boot_delegation_packages=org.apache.karaf.jaas.boot,org.apache.karaf.jaas.boot.principal,,sun.*,com.sun.*,javax.transaction,javax.transaction.*,javax.xml.crypto,javax.xml.crypto.*,org.apache.xerces.jaxp.datatype,org.apache.xerces.stax,org.apache.xerces.parsers,org.apache.xerces.jaxp,org.apache.xerces.jaxp.validation,org.apache.xerces.dom,co.elastic.apm.agent.*  -javaagent:/home/devops/apm/elastic-apm-agent-1.16.0.jar"

**After restarting the pod I can see the below logs**

[root@ose-master01 deployments-optima]# oc logs esb-account-7cf9d855bc-zfptq
2020-07-23 18:08:32.755 [elastic-apm-server-healthcheck] INFO - Elastic APM server is available: {  "build_date": "2020-05-12T00:04:54Z",  "build_sha": "64e91c95329991c36b16ff94fd34ea75230c06c2",  "version": "7.7.0"}
2020-07-23 18:08:32.786 [main] INFO co.elastic.apm.agent.util.JmxUtils - Found JVM-specific OperatingSystemMXBean interface:
2020-07-23 18:08:32.917 [main] INFO co.elastic.apm.agent.configuration.StartupInfo - Starting Elastic APM 1.16.0 as ESB_Test3_Account on Java 1.8.0_65 (Oracle Corporation) Linux 3.10.0-1062.18.1.el7.x86_64
2020-07-23 18:08:32.917 [main] WARN co.elastic.apm.agent.configuration.StartupInfo - To enable all features and to increase startup times, please configure application_packages
2020-07-23 18:08:32.928 [elastic-apm-remote-config-poller] INFO co.elastic.apm.agent.configuration.ApmServerConfigurationSource - Received new configuration from APM Server: {}
2020-07-23 18:08:32.929 [main] INFO co.elastic.apm.agent.impl.ElasticApmTracer - Tracer switched to RUNNING state
Red Hat Fuse starting up. Press Enter to open the shell now...
 86% [=============================================================>          ]2020-07-23 18:08:47.521 [CM Configuration Updater (ManagedService Update: pid=[org.apache.cxf.osgi])] INFO co.elastic.apm.agent.servlet.ServletVersionInstrumentation - Servlet container info = Undertow - 2.0.20.Final-redhat-00001
2020-07-23 18:08:47.951 [paxweb-config-2-thread-1] INFO co.elastic.apm.agent.bci.bytebuddy.CustomElementMatchers - Cannot read implementation version based on ProtectionDomain. This should not affect your agent's functionality. Failed with message: For input string: "fuse"
100% [========================================================================]

Karaf started in 46s. Bundle stats: 397 active, 397 total

**The pod is on the below Java version**
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)

Now, when we run some transactions, we can see them in the pod logs, but nothing shows up in the Transactions tab in Kibana.
Please advise what further changes we need to make to monitor Undertow 2.0.20.

Along with this, I want to let you know that we are using:
Server: Red Hat JBoss Fuse 7.4
Engine: Undertow

So, I want to know whether Elastic 7.7 and Kibana 7.7 support automatic detection of transaction traces via the APM agent for JBoss Fuse 7.4, because for JBoss EAP we are able to get all the data.

Hi again @khgupta3 :wave:
In your previous topic you reported something that seemed like an OSGi issue. Please try the latest agent version and see whether your logs are now free of such errors; that version contains some work done to eliminate such issues.

Other than that, as answered before, there is no dedicated support for Undertow or Fuse, so you would have to manually trace your application using our public API or rely on our OpenTracing integration, which must come with a relevant OpenTracing library (maybe this can help).
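To illustrate the manual-tracing option, the public-API pattern looks roughly like the sketch below. This is a sketch, not a drop-in solution: the `Transaction` class and `startTransaction()` factory here are simplified stand-ins for `co.elastic.apm.api.Transaction` and `ElasticApm.startTransaction()` so the example compiles without the `apm-api` dependency; in a real application you would depend on the `co.elastic.apm:apm-api` artifact and call the real API instead.

```java
// Sketch of the manual-tracing pattern supported by the public API.
// The Transaction stand-in below records calls so the pattern is inspectable;
// swap it for the real co.elastic.apm.api types in production code.
import java.util.ArrayList;
import java.util.List;

public class ManualTracingSketch {

    // Hypothetical stand-in for co.elastic.apm.api.Transaction.
    static class Transaction {
        final List<String> events = new ArrayList<>();
        void setName(String name) { events.add("name=" + name); }
        void setType(String type) { events.add("type=" + type); }
        void captureException(Exception e) { events.add("error=" + e.getMessage()); }
        void end() { events.add("end"); }
    }

    // Stands in for ElasticApm.startTransaction().
    static Transaction startTransaction() {
        return new Transaction();
    }

    // The pattern: start a transaction around each unit of work, always end it.
    static Transaction handleRequest(String payload) {
        Transaction transaction = startTransaction();
        try {
            transaction.setName("ESB ProcessMessage");
            transaction.setType("request");
            // ... ESB route / Undertow handler logic would run here ...
        } catch (Exception e) {
            transaction.captureException(e);
        } finally {
            transaction.end();
        }
        return transaction;
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("test").events);
    }
}
```

The important part is the try/catch/finally shape: the transaction is always ended, and exceptions are captured on it, so every unit of work produces exactly one reported transaction.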

Sorry we cannot help further at this point, I hope this helps at least to some degree.

Do you plan to support Fuse in the near future?

I'm afraid there are no such plans at the moment

I saw some posts where Fuse and Karaf were giving transaction data. We are on the verge of buying a license, but monitoring this ESB on the Red Hat JBoss Fuse server, Karaf runtime, and Undertow engine is crucial for us.
We checked with the Red Hat team too and got this reply: "Unfortunately it seems that Fuse on Karaf is not part of the supported platforms so it's more a matter of Elastic than Red Hat."
But yesterday we saw one or two transactions in that pod, though we are not sure where they were picked up from. You are the experts and can help us achieve the same.

Are we missing any config, given that a few transactions came through yesterday? Or is the logic behind some transaction types different?
Can we use the Jaeger agent with ELK to support this?

Unfortunately it is not supported out of the box, but, as said above, you can manually trace your application using our public API or OpenTracing bridge.

What is the transaction.type of these two transactions? If it is messaging, then they come from our JMS support.

Yes, we now have such integration in place (still experimental).

Thanks Eyal. We didn't know that different types of transactions have different logic behind them. We will check out Jaeger for sure.

Can you please elaborate a little more on how the JMS support picks up transactions of the messaging type? Does it mean they are not picked up from JBoss Fuse but from a different module?

Since our HTTP transactions rely on the Servlet API, the agent would not create HTTP transactions on Fuse. However, if your system uses JMS, for example with Active MQ, then the agent can create transactions for message-handling code.
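To make this concrete, the JMS auto-instrumentation targets message-handling entry points such as `MessageListener.onMessage`. The interfaces below are simplified stand-ins for the `javax.jms` types so the sketch compiles without a JMS provider on the classpath; with the real types, a JMS client such as ActiveMQ, and the agent attached, entering `onMessage` starts a transaction of type messaging.

```java
// Simplified stand-ins for javax.jms.Message and javax.jms.MessageListener,
// used only so this sketch compiles without a JMS provider on the classpath.
interface Message { String getBody(); }
interface MessageListener { void onMessage(Message message); }

// With the real javax.jms types and the agent attached, the agent wraps
// onMessage(): a transaction (type "messaging") starts when the method is
// entered and ends when it returns. This is why a Fuse app that consumes
// JMS messages can show a few transactions even without Servlet support.
class AccountMessageListener implements MessageListener {
    String lastProcessed;

    @Override
    public void onMessage(Message message) {
        lastProcessed = message.getBody();
        // ... route the message through the ESB ...
    }
}
```

So the occasional transactions you saw would come from this message-handling path, not from the HTTP/Undertow path.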

Got it, Eyal, thanks for clearing this up.
Now we want to achieve this without changing the code and are working on a workaround. We recently found that another agent, "naver/pinpoint", is able to pick up all JBoss Fuse transactions out of the box using the configuration below.

- name: JAVA_OPTS
  value: " -Xms500M -Xmx1024M -verbose:gc -Xloggc:/data/fuse/jboss/data/log/gc.log -Dorg.osgi.framework.bootdelegation=com.navercorp.pinpoint.,org.osgi.framework.,sun.,com.sun.,org.apache.xerces.,sun.misc.,org.osgi.jmx.,org.springframework.osgi.,org.apache.felix.,org.osgi.service.blueprint.container.,org.apache.felix.framework.,org.apache.camel.,org.apache.aries.* -Djboss.modules.system.pkgs=org.jboss.byteman,org.jboss.logmanager,com.navercorp.pinpoint.bootstrap,com.navercorp.pinpoint.common,com.navercorp.pinpoint.exception -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Xbootclasspath/p:/home/devops/NFT/pinpointagent2/jboss-logmanager-2.0.7.Final-redhat-1.jar -javaagent:/home/devops/NFT/pinpointagent2/pinpoint-bootstrap-2.0.0.jar -Dpinpoint.container -Dpinpoint.applicationName=ESB_Account -Dpinpoint.agentId=ESB_Account"

Here in -Xbootclasspath there is one more jar being used, which might be what is picking up the transaction data. Can we achieve the same through a config change in the agent or elsewhere, without making any code changes?

We are looking forward to using Elastic as our single tool for tracing everything. So, if your team can help, it will be really appreciated, and this showstopper will be no more.

This config only adds a jar to the boot classpath search list.

The Pinpoint agent may be picking these transactions based on its Undertow support, which we currently don't have. Unfortunately, this is not on our immediate roadmap.

Finally, we tried OpenTracing and made the below change in JAVA_OPTS:

-Xbootclasspath/p:/home/devops/apm/apm-opentracing-1.16.0.jar -Delastic.apm.server_urls= -Delastic.apm.service_name=ESB_Merged -Delastic.apm.hostname=ESB_merged1 -Delastic.apm.log_level=DEBUG -Delastic.apm.log_file=/home/devops/apm/log/elastic-apm-esb-account-server.log -Delastic.apm.enable_log_correlation=true -javaagent:/home/devops/apm/elastic-apm-agent-1.16.0.jar

Here we use the OpenTracing jar and the APM agent jar, but we are still not able to see any transactions.

Also, the Elastic OpenTracing documentation mentions adding a dependency in pom.xml, but it does not mention where to define the APM Server IP to which the transactions will be sent. How will that dependency know where to send the traces?

Where should that OpenTracing jar be defined, if the way I have given it is not correct?

No need for that; it should be removed.

Elastic Java agent integration with OpenTracing includes three components:

  1. The agent itself, which is installed/attached and configured (including server_urls) just as you would use it standalone. The agent is responsible for the tracing heavy lifting - it collects metrics, configurations, events, etc. and communicates with the APM server. All it misses in your case is specific instrumentation for Undertow or Fuse, which is why we use OpenTracing.
  2. An OpenTracing library that fills the gap by instrumenting what the agent doesn't and creating events accordingly. The events are created through the OpenTracing API, provided you supply an implementation of the API (see #3). For this, you can try out the camel-opentracing component (notice the setTracer API that allows you to set a specific tracer implementation - the Elastic tracer from the bridge, in this case).
  3. The OpenTracing bridge jar. Note that this jar only bridges between the OpenTracing library and the Elastic Java agent. It implements the OpenTracing API, but it does nothing if the agent is not attached as well. You need to add this jar to your application dependencies, alongside the library (from #2), and you need to initialize the tracer, as documented.

I hope this helps.

Thanks Eyal.

It really made things a little clearer. But while implementing it we hit an issue, and we want to know whether just adding the camel-opentracing dependency and the Elastic OpenTracing dependency in pom.xml and initializing the Tracer in a Java file will work. How will the APM agent pick up the data just from initializing the Tracer?

Step 1: In pom.xml we have the below for Camel tracing

<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-specialagent</artifactId>
</dependency>

Step 2: Adding the Elastic OpenTracing dependency

<dependency>
    <groupId>co.elastic.apm</groupId>
    <artifactId>apm-opentracing</artifactId>
    <version>${elastic-apm.version}</version>
</dependency>

Step 3: Initializing Tracer

import co.elastic.apm.opentracing.ElasticApmTracer;
import io.opentracing.Tracer;

Tracer tracer = new ElasticApmTracer();

So, will the above three steps work with the APM agent (JAVA_OPTS already defined in the deployment yaml, as we did for standalone)? We are not getting traces, and I think we need to define something somewhere that can send the transaction data to the agent. I am not sure how it will work, though, and I followed your document.

In addition to initializing the Elastic tracer, did you set it on the Camel OT tracer?

Never tried it, but I believe it should look something like this:

Tracer elasticTracer = new ElasticApmTracer();
OpenTracingTracer camelTracer = new OpenTracingTracer();
// By default it uses a Noop Tracer, but you can override it with a specific OpenTracing implementation:
camelTracer.setTracer(elasticTracer);
// And then initialize the camel context:
camelTracer.init(camelContext);

Two tracers? I guess it should be only one. If we are using the Elastic ApmTracer, why do we have to use the Camel tracer?
From this blog I can see it is mentioned to replace the existing tracer, as in the Jaeger example.

The question is just how the APM agent will read anything from mere initialization; also, in the Camel tracing there is no link between Elastic and Camel. How will Camel send data to Elastic?

I am also defining two OpenTracing agents, camel-opentracing and apm-opentracing. Do we require both? What is their interconnection?

We are getting the below error:
Error executing command: Unresolved constraint in bundle bil-account [399]: Unable to resolve 399.1: missing requirement [399.1] osgi.wiring.package; (osgi.wiring.package=co.elastic.apm.opentracing)

Also, we found that it may be because two folders got created:

apm-agent-parent (which did not have any jar)
apm-opentracing (which has the jar, as we defined its dependency)

So, for apm-agent-parent, which dependency is required? It is not mentioned in the document.

Two tracers and both are noop :slight_smile:.

Again, only from briefly reading about it, this is what I think: the camel-opentracing library is responsible for identifying interesting Camel events and invoking OpenTracing APIs. It invokes those through the Camel tracer (which does not implement io.opentracing.Tracer), which in turn delegates to an io.opentracing.Tracer instance.

If you don't set an OT tracer, Camel will set a noop one and you won't get anything. So you need to set the Elastic OT tracer; then the Camel tracer will delegate to it. This is the link, and this is also why you need both dependencies.
The Elastic OT tracer is also noop on its own, so this alone won't be useful. However, if you also attach the agent to the JVM (not as a dependency, but as a javaagent), then the agent will instrument the Elastic OT bridge classes and translate all these OT API calls into Elastic spans that the agent will send to the APM server.
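To visualize this delegation chain, here is a self-contained sketch. The types below are simplified stand-ins for io.opentracing.Tracer, Camel's OpenTracingTracer, and the Elastic bridge (not the real signatures); the point is only to show why the default noop tracer produces nothing and why setting the Elastic tracer is the missing link.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for io.opentracing.Tracer.
interface OtTracer { void startSpan(String operation); }

// What Camel falls back to by default: a noop tracer that records nothing.
class NoopTracer implements OtTracer {
    public void startSpan(String operation) { /* intentionally does nothing */ }
}

// Stand-in for the Elastic OT bridge: with the javaagent attached, these
// calls would be turned into real Elastic spans and sent to the APM server.
class ElasticBridgeTracer implements OtTracer {
    final List<String> recorded = new ArrayList<>();
    public void startSpan(String operation) { recorded.add(operation); }
}

// Stand-in for Camel's OpenTracingTracer: it does not implement OtTracer
// itself; it only delegates Camel events to whichever OtTracer it is given.
class CamelTracerSketch {
    private OtTracer delegate = new NoopTracer(); // noop unless overridden

    void setTracer(OtTracer tracer) { delegate = tracer; }

    void onExchangeStarted(String routeId) { delegate.startSpan(routeId); }
}
```

Without setTracer, every exchange goes to the noop tracer and nothing is recorded; after setTracer(elasticTracer), the same event reaches the bridge, which is exactly the role the real bridge plays once the agent instruments it.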

Neither is an agent; they are both libraries your app should depend on.

There is no dependency on apm-agent-parent; only the one dependency suggested in the documentation is needed.

Yes, Eyal, I did exactly the same, I guess. The Camel documentation also says to include the camel-opentracing component in your POM, along with any specific dependencies associated with the chosen OpenTracing-compliant Tracer. So, we added the below.

By adding these dependencies I don't have any issue in terms of build and deployment.

But as soon as I add the below lines of code to a Java class, I get the reported error.

import co.elastic.apm.opentracing.ElasticApmTracer;
import io.opentracing.Tracer;
Tracer tracer = new ElasticApmTracer();

Error executing command: Unresolved constraint in bundle bil-account [407]: Unable to resolve 407.3: missing requirement [407.3] osgi.wiring.package; (osgi.wiring.package=co.elastic.apm.opentracing)

We did this initialization in a Java class constructor, but it is not working. This was also on the Camel page you shared ("The Tracer used will be implicitly loaded from the camel context Registry or using the ServiceLoader.").

Am I missing anything now?

Try to remove the <scope>provided</scope> part and rebuild.

When we do that, the dependency build does not work, which is why we searched and added this scope. After that the build worked, but when we initialize the ElasticTracer we get a compilation error in the Java constructor. Below is the error:

Error executing command: Unresolved constraint in bundle bil-account [407]: Unable to resolve 407.3: missing requirement [407.3] osgi.wiring.package; (osgi.wiring.package=co.elastic.apm.opentracing)

Is there any dependency I am missing? Or is something wrong?

Can we have the exact code that needs to be defined, and where? We want to test OpenTracing to make JBoss Fuse monitoring work.

What does Elastic actually look for? Is there any format that OpenTracing provides you? We are asking to see whether we can find another way to achieve this; otherwise, please help us with this OpenTracing solution and explain how exactly it will work. We tried, but we are getting errors.