Full APM trace with Kafka documentation

Kibana version:
v 7.5.2

Elasticsearch version:
7.5.2

APM Server version:
7.5.2

APM Agent language and version:
Java 1.14.0

Browser version:
Chrome

Errors in browser console (if relevant):
N/A

Provide logs and/or server output (if relevant):
N/A

Is there documentation on how to set up APM tracing with Kafka? I have set up the APM agent on two processes: the first reads from a database using a Hikari connection pool and publishes to Kafka with a Kafka producer; the second uses a Kafka consumer and pushes to Elasticsearch. With this setup, all I see on the Kibana APM page are the Kafka consumer events. I don't see the database reads or the pushes to Elasticsearch. I was hoping to see the timing of the full trace (DB -> Java process -> Kafka -> Java process -> Elasticsearch). Any documentation on this would be helpful.

my config:
    export JAVA_OPTS="${JAVA_OPTS} -javaagent:/data/elastic-apm-agent.jar"
    export JAVA_OPTS="${JAVA_OPTS} -Delastic.apm.service_name=xxx-search"
    export JAVA_OPTS="${JAVA_OPTS} -Delastic.apm.environment=test.xxx,test.xxx"
    export JAVA_OPTS="${JAVA_OPTS} -Delastic.apm.application_packages=com,org,xxx,java"
    export JAVA_OPTS="${JAVA_OPTS} -Delastic.apm.server_urls=http://apm.elk.xxx.xxx:8200"
    export JAVA_OPTS="${JAVA_OPTS} -Delastic.apm.disable_instrumentations=mule"

Our agent will only trace events that occur within traced transactions. A transaction is the entry event on a JVM process that tells the agent to trace other events within its execution. The Java agent will start a transaction automatically when a supported technology is used, for example Servlets, some scheduling frameworks, and consumers of messaging frameworks. It seems your producer side is not using any of those, in which case you can use our public API to start and stop a transaction manually, as sketched below.
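
For illustration only, here is a minimal sketch of what that could look like on the producer process, using the agent's public API (the apm-agent-api dependency). The class name, transaction name, and the readFromDatabase/publishToKafka methods are hypothetical placeholders for your existing code:

    import co.elastic.apm.api.ElasticApm;
    import co.elastic.apm.api.Scope;
    import co.elastic.apm.api.Transaction;

    public class ProducerJob {

        public void runOnce() throws Exception {
            // Without a Servlet request or message consumption as the entry point, start the
            // transaction manually so the agent has something to attach the JDBC and Kafka spans to.
            Transaction transaction = ElasticApm.startTransaction();
            try (Scope scope = transaction.activate()) {
                transaction.setName("poll-db-and-publish");   // hypothetical transaction name
                transaction.setType(Transaction.TYPE_REQUEST);
                readFromDatabase();                           // Hikari/JDBC work, traced as spans
                publishToKafka();                             // KafkaProducer.send, traced as a span
            } catch (Exception e) {
                transaction.captureException(e);
                throw e;
            } finally {
                transaction.end();
            }
        }

        // Hypothetical placeholders for the existing application code.
        private void readFromDatabase() { /* ... */ }
        private void publishToKafka() { /* ... */ }
    }

With a transaction active on that thread, the agent's built-in JDBC and Kafka client instrumentation should pick up the spans automatically (assuming the Kafka producer instrumentation is available in your agent version).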

The above explains why you DO see the consumer side of things; however, it does not explain why you don't see the writes to Elasticsearch. Note that if the consumer reads the events from the topic but does not send them to Elasticsearch on the same thread (e.g. it puts them on a queue that is read by another thread), then the agent cannot correlate that out of the box.
If reading from the topic and sending to Elasticsearch are done on the same thread, please add this info:

  1. Which Elasticsearch client version are you using?
  2. Which Kafka clients version are you using?
  3. If you can share an outline of your consumer code, that may assist with the analysis

Elasticsearch Java client:
7.2.0
Kafka client:
2.1.1

But you answered my question. Yes, I have a batch queue that the Kafka consumer writes to, and my queue consumer commits after a successful push to Elasticsearch. I'm using the Elastic Retry object to send (which is also multithreaded).

So, from your comments, it looks like I have to modify code to make this work? Or is there an APM agent consumer configuration way to do this (i.e. like the Hikari connection pool)?

Separate question that might work for me: I have Dynatrace trace IDs. Is there a way to use those via configuration?

Thank you

Yes, it seems you will have to make some manual changes using our public API to get the full trace.

Take a look at our example of propagating context through blocking queues; a rough sketch of the idea follows.
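
This is not the linked example itself, but a sketch of the idea applied to your setup, assuming the Kafka consumer poll already runs inside an agent-created transaction: carry the trace headers along with the queued item and start a child transaction on the worker thread via the public API. The QueuedBatch wrapper, the names, and the pushToElasticsearch placeholder are hypothetical:

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    import co.elastic.apm.api.ElasticApm;
    import co.elastic.apm.api.Scope;
    import co.elastic.apm.api.Transaction;

    public class QueueHandoff {

        // Hypothetical wrapper: the payload plus the serialized trace context headers.
        static class QueuedBatch {
            final Object payload;
            final Map<String, String> traceHeaders = new ConcurrentHashMap<>();
            QueuedBatch(Object payload) { this.payload = payload; }
        }

        private final BlockingQueue<QueuedBatch> queue = new LinkedBlockingQueue<>();

        // Runs on the Kafka consumer thread, inside the transaction the agent created for the poll.
        void enqueue(Object payload) throws InterruptedException {
            QueuedBatch batch = new QueuedBatch(payload);
            // Copy the current trace context (traceparent header) onto the queued item.
            ElasticApm.currentTransaction().injectTraceHeaders(batch.traceHeaders::put);
            queue.put(batch);
        }

        // Runs on the worker thread that pushes to Elasticsearch.
        void drainOne() throws Exception {
            QueuedBatch batch = queue.take();
            // Start a transaction whose remote parent is the consumer-side transaction.
            Transaction transaction = ElasticApm.startTransactionWithRemoteParent(batch.traceHeaders::get);
            try (Scope scope = transaction.activate()) {
                transaction.setName("bulk-to-elasticsearch"); // hypothetical name
                transaction.setType("messaging");
                pushToElasticsearch(batch.payload);           // Elasticsearch client call, traced as a span
            } catch (Exception e) {
                transaction.captureException(e);
                throw e;
            } finally {
                transaction.end();
            }
        }

        // Hypothetical placeholder for the existing bulk/Retry send.
        private void pushToElasticsearch(Object payload) { /* ... */ }
    }

This way the worker-side transaction joins the same distributed trace as the consumer-side transaction instead of starting a new, unrelated one.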

Do you mean you are using the BulkProcessor API?
