APM tracing from linkerd proxy through OpenCensus (jaeger format) errors

Kibana version:
Elasticsearch version:
APM Server version:
Original install method (e.g. download page, yum, deb, from source, etc.) and version:
Official docker images
Is there anything special in your setup? For example, are you using the Logstash or Kafka outputs? Are you using a load balancer in front of the APM Servers? Have you changed index pattern, generated custom templates, changed agent configuration etc.
Installed in k3s
Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):
I'm trying to use Elastic APM as the backend for distributed tracing. Using the Jaeger agent works well. Now we are adding the linkerd service mesh, and I want to use APM to capture trace information from the linkerd proxies as well. The proxies use the b3 (OpenZipkin) trace format, so I use the OpenCensus collector, which can convert the traces to Jaeger format and send them to APM (instead of to jaeger-collector). But when the traces start to come in, I see none in Kibana. The APM indices don't change, and I'm getting these errors in the APM logs:

Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x11f1fe8, ext:63732871282, loc:(*time.Location)(nil)}, Meta:{"pipeline":"apm"}, Fields:{"agent":{"name":"Jaeger","version":"unknown"},"ecs":{"version":"1.5.0"},"host":{"hostname":"web-566f6dc5c6-flqvh","name":"web-566f6dc5c6-flqvh"},"labels":{"app":"web-svc","direction":"inbound","http_host":"web-svc.emojivoto:80","http_path":"/api/vote?choice=:heart_eyes_cat:","http_status_code":"200","linkerd.io/control-plane-ns":"linkerd","linkerd.io/proxy-deployment":"web","linkerd.io/workload-ns":"emojivoto","pid":"1","pod-template-hash":"566f6dc5c6","start.time":"2020-08-12T10:27:13.087407704Z","version":"v10"},"observer":{"ephemeral_id":"706ebe68-4a10-4e0f-8a95-f7e47464353d","hostname":"apm-app-6688b4fd77-twgfv","id":"f069ce49-de57-4ae2-9cdb-9c007a30edb4","type":"apm-server","version":"7.8.1","version_major":7},"parent":{"id":"200e567092c4778d"},"processor":{"event":"span","name":"transaction"},"service":{"language":{"name":"unknown"},"name":"linkerd-proxy","node":{"name":"web-566f6dc5c6-flqvh"}},"span":{"duration":{"us":3712},"http":{"method":"GET","response":{"status_code":200}},"id":"3e037d93c97262db","name":"/api/vote?choice=:heart_eyes_cat:","subtype":"http","type":"external"},"timestamp":{"us":1597274482018817},"trace":{"id":"3199c3b23d1e756705d04cf44648b2ea"}}, Private:interface {}(nil), TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"class_cast_exception","reason":"class org.elasticsearch.index.mapper.ScaledFloatFieldMapper cannot be cast to class org.elasticsearch.index.mapper.ObjectMapper (org.elasticsearch.index.mapper.ScaledFloatFieldMapper is in unnamed module of loader java.net.FactoryURLClassLoader @dbed7fd; org.elasticsearch.index.mapper.ObjectMapper is in unnamed module of loader 'app')"}}

When I switch to jaeger-collector instead of APM, all the traces are captured correctly.

Here is my APM config:

apm-server.yml: |

    apm-server:
      kibana:
        enabled: true
        host: "http://kibana-svc:5601"
        username: elastic
        password: sOmEpSwD
      jaeger:
        grpc:
          enabled: true
          host: ""
        http:
          enabled: false
    setup.template:
      enabled: true
      overwrite: true
    output.elasticsearch:
      hosts: "http://elasticsearch-svc:9200"
      username: "${ES_USR}"
      password: "${ES_PWD}"
    setup.kibana:
      host: '${KIBANA_HOST}'
    logging:
      level: info #warning
    max_procs: 1
    queue.mem:
      events: 8000
      flush.min_events: 500

Steps to reproduce:

  1. Install Elasticsearch, Kibana and APM
  2. Use OpenCensus collector to send tracing to APM as Jaeger backend
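For reference, a minimal OpenCensus collector config for this pipeline might look like the sketch below. The `zipkin` receiver and `jaeger` exporter key names follow the opencensus-service docs; the addresses and service names are assumptions that need to match your cluster and your apm-server's Jaeger intake settings:

```yaml
# Sketch only -- adjust ports and service names to your setup.
receivers:
  zipkin:
    # linkerd proxies emit spans in zipkin/b3 format
    address: "0.0.0.0:9411"
exporters:
  jaeger:
    # forward the converted spans to apm-server's Jaeger intake
    # (assumed endpoint; requires the Jaeger HTTP intake to be enabled)
    collector_endpoint: "http://apm-server:14268/api/traces"
```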

Hi @hoboroten, and welcome to the forum!

The problem appears to be a field mapping collision on labels.app. I see in the event that you have labels.app: "web-svc". I suspect something else previously populated the "app" label with a number.

Can you check the field mapping for "labels.app" by executing the following query in the Kibana Console?

GET /apm-*/_mapping/field/labels.app

What does it output? If it says the field is scaled_float, as I expect, then you can work around this issue by setting ignore_malformed for that field. Ideally, though, different types wouldn't be assigned to the same field in the first place.
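If the mapping does come back as scaled_float, one way to apply the workaround is sketched below. Existing indices keep their mapping, so this uses the index-level `index.mapping.ignore_malformed` setting in a legacy index template that takes effect for newly created apm-* indices (the template name and order are illustrative):

```
PUT _template/apm-labels-app-workaround
{
  "index_patterns": ["apm-*"],
  "order": 10,
  "settings": {
    "index.mapping.ignore_malformed": true
  }
}
```

With this in place, a string value arriving for a numeric field is dropped from the document instead of rejecting the whole event.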

This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.