Until now I've been using the ELK stack to ingest and visualize logs. Now I'm trying to add APM to monitor applications as well, and I've run into a scenario that I don't know whether the configuration can handle.
I have 3 Java (Spring) services. One of them is the entry point: it generates jobs via HTTP POST requests. When these jobs are executed, they produce a large number of Kafka events (from 1k to 500k). The other two applications (which I'll refer to as client applications) listen to these Kafka topics and consume the events.
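To make the flow concrete, here is a minimal, self-contained sketch of it in plain Java (no real Kafka or Spring involved; all names here are hypothetical): one POST-triggered job fans out into many events, which a client application then consumes one by one.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the setup: the entry-point service turns one job
// into many events; a client application consumes them independently.
public class JobFanOutSketch {

    // Stands in for the entry-point service: one job produces many events.
    static List<String> runJob(String jobId, int eventCount) {
        List<String> events = new ArrayList<>();
        for (int i = 0; i < eventCount; i++) {
            events.add(jobId + "-event-" + i);
        }
        return events;
    }

    // Stands in for a client application consuming the topic.
    static int consume(List<String> events) {
        int processed = 0;
        for (String event : events) {
            processed++; // real code would process the event payload here
        }
        return processed;
    }

    public static void main(String[] args) {
        // Real jobs produce between 1k and 500k events.
        List<String> events = runJob("job-42", 1000);
        System.out.println("processed=" + consume(events));
    }
}
```

With APM, the question is whether each `consume`-side event should show up as its own transaction or as a span under the original POST's trace.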
With very little configuration, I got APM working and everything works very well, including distributed tracing. But when I look at a transaction in a client application, Kibana shows me this warning: "Number of items in this trace exceed what is displayed", and I suspect I'm losing some spans that I can't see: there are no error logs, yet some transactions have all their spans and others don't.
If I click through to the parent transaction, I see the trace of the HTTP POST request for the job generated by the entry-point application, and it shows all the spans (the Kafka send-event spans and the processing of each event in the client application) until the span limit (1000 by default) is reached.
So, my question is: can I limit distributed tracing between my entry-point application and the client applications? My intention is for each Kafka event to become its own transaction in the client application, rather than a span of the parent transaction (the HTTP request in the entry-point application).
I haven't instrumented the code.
My agent config is:
service_version=<tag of version>
service_name=<service-name>
hostname=<host-name>
environment=<environment>
application_packages=<my.package>
server_urls=<apm-server-url:9200>
enable_log_correlation=true
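For context, this is roughly how I attach the agent, assuming the standard `-javaagent` attachment with the settings passed as `elastic.apm.*` system properties (the jar path and service jar name are illustrative; the angle-bracket placeholders are the same as above):

```shell
# Illustrative only: attach the Elastic APM Java agent and pass the
# same settings listed above as JVM system properties.
java -javaagent:/path/to/elastic-apm-agent.jar \
     -Delastic.apm.service_name=<service-name> \
     -Delastic.apm.service_version=<tag of version> \
     -Delastic.apm.hostname=<host-name> \
     -Delastic.apm.environment=<environment> \
     -Delastic.apm.application_packages=<my.package> \
     -Delastic.apm.server_urls=<apm-server-url:9200> \
     -Delastic.apm.enable_log_correlation=true \
     -jar my-service.jar
```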
If you need any more specific info, please tell me.
Thanks in advance.