Dec 22nd, 2022: [EN] How to run the official OpenTelemetry Demo with Elastic

Over the past two years, OpenTelemetry has emerged as the new open source observability standard and has become the second most popular project in the CNCF.

As more and more companies adopt OpenTelemetry for their observability use cases, a reference demo application has become a necessity for evaluation, proof-of-concept, and validation purposes.

The good news is that the OpenTelemetry community recently announced that the official OpenTelemetry Demo is now Generally Available. The demo showcases an end-to-end distributed system instrumented 100% with OpenTelemetry traces and metrics. It not only provides a robust sample application for developers learning OpenTelemetry, it also demonstrates OpenTelemetry's capabilities in terms of automatic and manual instrumentation, custom spans, context propagation, and so on. From the very beginning, the OpenTelemetry Demo was designed around the “bring your own backend” concept, which lets developers test how easily OpenTelemetry data can be sent to an observability backend and then leverage the best of that backend to explore the data.

In this post, you will see how easy it is to set up the official OpenTelemetry Demo to work with Elastic, and how to explore OpenTelemetry traces, metrics and logs with Elastic.

Set up the official OpenTelemetry Demo application with Elastic

Elastic OpenTelemetry integrations allow you to reuse your existing OpenTelemetry instrumentation to quickly analyze distributed traces and metrics, helping you monitor business KPIs and technical components with the Elastic Stack.

Elastic has provided OpenTelemetry integration since version 7.7 (released in July 2020): the Elastic APM server natively supports the OpenTelemetry protocol, and Elastic provides official OpenTelemetry exporters for traces, metrics, and logs.

Here are the steps to set up the demo with Elastic:

Step 1

git clone https://github.com/open-telemetry/opentelemetry-demo.git  
cd opentelemetry-demo/

Step 2
Ensure you have an account on Elastic Cloud and a deployed stack (see instructions here).

Step 3
Follow the Elastic APM quick start guide to configure the APM integration. Then, from Kibana, go to Integrations → APM → OpenTelemetry and note the values of OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS, which are required to configure the OpenTelemetry Collector in the next step.
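The exact values are generated for your deployment. For illustration only, here is a hypothetical example of what Kibana displays (the endpoint and token below are placeholders):

OTEL_EXPORTER_OTLP_ENDPOINT=https://my-deployment.apm.us-central1.gcp.cloud.es.io:443
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer abc123placeholdertoken"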

Step 4
Go back to the OpenTelemetry Demo project that you previously cloned. Vendor-specific settings can be written in src/otelcollector/otelcol-config-extras.yml.

Copy the following snippet into the file, and adjust the token and endpoint values with those you noted in the previous step: OTEL_EXPORTER_OTLP_HEADERS (use just the token value, without the Authorization header prefix) and OTEL_EXPORTER_OTLP_ENDPOINT (without the https:// prefix):

extensions:
  # Authenticates the collector to Elastic APM using the secret token noted in step 3
  bearertokenauth/client:
    token: "YOUR_TOKEN"

receivers:
  otlp:
    protocols:
      grpc:
      http:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      load:
      memory:
      network:

exporters:
  # Elastic APM server OTLP endpoint noted in step 3 (without the https:// prefix)
  otlp/elastic:
    endpoint: "YOUR_ENDPOINT"
    auth:
      authenticator: bearertokenauth/client

processors:
  # Adds deployment.environment to all exported data (used later for Kibana spaces)
  resource:
    attributes:
      - key: deployment.environment
        action: insert
        value: production

service:
  extensions: [bearertokenauth/client]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resource]
      exporters: [otlp/elastic]
    metrics:
      receivers: [otlp, hostmetrics]
      processors: [resource]
      exporters: [otlp/elastic]
    logs:
      receivers: [otlp]
      processors: [resource]
      exporters: [otlp/elastic]

As you can see in the snippet above, we use a resource processor to add the deployment.environment attribute to the OpenTelemetry data (this will later help us configure service visibility with Kibana spaces). We also export traces, metrics, and logs using the otlp/elastic exporter.

Step 5
The official OpenTelemetry Demo provides two deployment options: Docker and Kubernetes. The easier of the two is Docker; you can simply start the demo using the following command:

docker compose up --no-build
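If you prefer the Kubernetes option instead, the demo can also be installed via its Helm chart. A minimal sketch, assuming the open-telemetry Helm repository and default chart values (the release name my-otel-demo is arbitrary):

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm install my-otel-demo open-telemetry/opentelemetry-demo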

With the Docker option, once the containers are started you can verify that the web store is running at http://localhost:8080
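Beyond opening the web store, a few optional command-line checks can help confirm that the demo is healthy and that the collector is exporting without errors (a sketch, assuming the collector service is named otelcol in the demo's docker-compose.yml):

docker compose ps                              # all demo containers should be in the "running" state
docker compose logs otelcol | grep -i error    # the collector logs should be free of exporter errors
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080   # expect a 200 from the web store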

Explore the OpenTelemetry demo application data with Elastic

Traces, Metrics & Logs

Navigate to Kibana's Observability app. From the APM Services tab, you can see that the instrumented services of the OpenTelemetry Demo appear correctly.

The deployment.environment attribute “production” that we added earlier in the processor shows up correctly in the Environment column. Starting with Elastic 8.2.0, the APM app is Kibana Space aware. This allows you to separate your data by service environment, for example; following this documentation, you can set up a dedicated Kibana space for the OpenTelemetry Demo app.

The service map view gives you a visual representation of the instrumented services and their dependencies.

From the service map, you can drill into the details of the checkoutservice, for example, which brings you to the timeline visualization of the distributed traces.

Switching to the Metadata tab, you can see Elastic's native capability to map OpenTelemetry custom span attributes.

Although most OpenTelemetry logs implementations are still experimental at this stage, switching to the Logs tab shows how Elastic helps with trace and log correlation.

While Elastic is still maturing the native integration of OpenTelemetry metrics into the Observability app, you can already make use of Elastic's native dashboarding capabilities for OpenTelemetry metrics.

To do so, create a data view with the metrics-apm.app* index pattern; you can then build any kind of visualization for the OTel metrics with Kibana Lens. Here is an example of building a Lens visualization for CPU utilization of the recommendation service.
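If the data view comes up empty, you can first confirm that application metrics are being indexed at all. A quick sketch using the Elasticsearch _count API (the Elasticsearch endpoint and credentials below are placeholders for your own deployment):

curl -s -u "elastic:$ELASTIC_PASSWORD" "https://YOUR_ES_ENDPOINT:443/metrics-apm.app*/_count?pretty"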

Feature flags

One of the most interesting parts of the official OpenTelemetry Demo is that it comes with several feature flags that control failure conditions in specific services. Two feature flag scenarios can be enabled to simulate a product catalog failure and a memory leak in a specific service.

The idea of testing such scenarios is to validate how the observability vendor can help reduce the mean time to detection and find root causes easily.

To turn on the feature flags, simply use the Feature Flags UI at http://localhost:8080/feature, where you can control the status of each flag.

Once you enable both feature flags, you can see the failed transaction rates increase.

You can then go to the service details of the frontend service and check out the HTTP GET transactions.

From the Failed transaction correlations tab, Elastic APM brings to light the metadata most correlated with failed transactions; here the failing product id OLJCESPC7Z can be immediately identified.

For the second feature flag scenario, you can leverage the Latency correlations tab to easily identify the root cause of the latency, showing the impact coming from the recommendation service.

What’s next

If you'd like to explore OpenTelemetry with Elastic further, I'd encourage you to take a look at Elastic CI/CD observability, which relies mainly on OpenTelemetry. You can also try out instrumenting AWS Lambda functions with OpenTelemetry and monitoring them with Elastic.
