I'm looking for some guidance on implementing EDOT (Elastic Distributions of OpenTelemetry) in a multi-tenant Kubernetes environment. I've run into some technical challenges and would appreciate insights from anyone who's tackled similar problems.
Current Working Setup
We currently have a multi-tenancy implementation for APM data:

- We use an ingress pipeline with a reroute processor
- For APM data, we use the `<type>-apm.app@custom` ingest pipeline
- The pipeline logic keeps the original dataset field
- It sets the data stream namespace using the `tenant_id` from the incoming labels
- The reroute processor then directs data to the appropriate namespace and dataset
This approach works perfectly for APM data - each tenant's telemetry is properly isolated in its own namespace.
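For context, a simplified sketch of what that pipeline looks like (using `logs` as the example type; the real pipeline has more conditions):

```
PUT _ingest/pipeline/logs-apm.app@custom
{
  "description": "Route each tenant's APM data to its own namespace",
  "processors": [
    {
      "reroute": {
        "if": "ctx.labels?.tenant_id != null",
        "dataset": "{{data_stream.dataset}}",
        "namespace": "{{labels.tenant_id}}"
      }
    }
  ]
}
```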
EDOT Implementation Challenge
Following the guidance in the blog post, we've attempted to implement EDOT using the Elasticsearch exporter.
The key issue we're facing is that EDOT with the Elasticsearch exporter sends telemetry data directly to Elasticsearch instead of going through APM like our other data flows. This direct path bypasses the APM ingress pipeline and reroute processor that our multi-tenancy setup relies on.
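Our collector configuration looks roughly like this (endpoint and credentials are placeholders), which is why the data never passes through the APM ingest pipelines:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  elasticsearch:
    # Placeholders; data is indexed straight into Elasticsearch,
    # so the <type>-apm.app@custom pipelines never run on it
    endpoints: ["https://our-cluster.es.example.com:443"]
    api_key: ${env:ELASTIC_API_KEY}

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [elasticsearch]
```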
We've tried:

- Modifying the `logs@custom`/`metrics@custom` pipelines to mirror our APM pipeline configuration (see the sketch below)
- Identifying fields in the OTel data equivalent to `tenant_id` to use for routing
Neither approach has successfully maintained our tenant isolation for EDOT data.
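For reference, the direction we've been experimenting with in `logs@custom` looks roughly like this. The part we're unsure about is which field actually carries the tenant id in OTel-mapped documents; it may sit under `resource.attributes` rather than `labels`, depending on the exporter's mapping mode:

```
PUT _ingest/pipeline/logs@custom
{
  "processors": [
    {
      "set": {
        "description": "Copy the tenant id into labels if it arrives as a resource attribute (exact path uncertain)",
        "if": "ctx.labels?.tenant_id == null && ctx.resource?.attributes?.tenant_id != null",
        "field": "labels.tenant_id",
        "value": "{{resource.attributes.tenant_id}}"
      }
    },
    {
      "reroute": {
        "if": "ctx.labels?.tenant_id != null",
        "namespace": "{{labels.tenant_id}}"
      }
    }
  ]
}
```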
Specific Questions
- Has anyone successfully configured EDOT to route through APM instead of directly to Elasticsearch?
- What's the recommended approach to namespace-based routing for OTel telemetry data when using the Elasticsearch exporter?
- Are there specific configuration parameters in the Elasticsearch exporter that would allow routing to a specific namespace?
I have not tried EDOT yet, so this might not be helpful.
With both Go and .NET, I fully tested sending plain OTel straight to APM and had it working as expected.
Looking at the notes on EDOT for Python, I would use the OTLP exporter and target the APM endpoint as you normally would.
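That is, point the SDK's OTLP exporter at the APM Server using the standard OTel environment variables (endpoint and token below are placeholders):

```
export OTEL_EXPORTER_OTLP_ENDPOINT="https://your-apm-server:8200"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your-secret-token"
```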
My second thought is that the field names may be off in how the data is being ingested; I would check that.
Otherwise, if you need to stick with the Elasticsearch exporter, it looks like you might be able to set a target ingest pipeline (sketch below). The version of Elastic you are on might also be causing problems if it isn't fairly recent; for our use case, moving from 8.14 to 8.16 brought several fixes to how Elastic itself processes incoming OTel data.
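Something along these lines in the collector config, assuming the exporter's `pipeline` setting does what the docs suggest; the pipeline name here is hypothetical and would need to perform the same tenant reroute as your APM path:

```yaml
exporters:
  elasticsearch:
    endpoints: ["https://your-cluster.es.example.com:443"]  # placeholder
    api_key: ${env:ELASTIC_API_KEY}
    # Hypothetical pipeline that does the tenant_id-based reroute
    # before the document is indexed
    pipeline: otel-tenant-routing
```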