Failed to parse field [labels.name] of type [scaled_float]

Kibana version:
7.3.0
Elasticsearch version:
7.3.0
APM Server version:
7.3.0
APM Agent language and version:
1.8.0 Java Agent

Original install method (e.g. download page, yum, deb, from source, etc.) and version:
elastic.co helm charts in self hosted k8s cluster

Fresh install or upgraded from other version?
fresh

Is there anything special in your setup? For example, are you using the Logstash or Kafka outputs? Are you using a load balancer in front of the APM Servers? Have you changed index pattern, generated custom templates, changed agent configuration etc.
No. Straightforward setup: APM Server talking to Elasticsearch, Kibana reading from Elasticsearch. Additionally, but unrelated, Filebeat sending to ES.

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):
I'm trying to instrument one of our JEE apps. The APM agent is up and running and configured like this:
"-javaagent:MY_APP_HOME/lib/elastic-apm-agent-" + DependencyVersion.ELASTIC_APM_VERSION.version + ".jar",
"-Delastic.apm.service_name=iotcore-MY_STAGE-MY_REGION-MY_SERVICENAME",
"-Delastic.apm.application_packages=com.myapp.io",
"-Delastic.apm.server_urls=https://apm-server.example.net",
"-Delastic.apm.environment=MY_STAGE",

The code itself is annotated like this:
import co.elastic.apm.api.CaptureTransaction;
and, in the class, a @CaptureTransaction annotation.

The service comes up and looks good, but on the APM server side, I get these exceptions:
WARN elasticsearch/client.go:535 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x2c0c3ec0, ext:63702599350, loc:(*time.Location)(nil)}, Meta:common.MapStr{"pipeline":"apm"}, Fields:common.MapStr{"agent":common.MapStr{"ephemeral_id":"dbe578c2-2ad3-453d-ba88-1abec74a33f7", "name":"java", "version":"1.8.0"}, "container":common.MapStr{"id":"9defbc7c02e976329ed66a7ee2226e5703765087c9906ada3ce060f70c447fdf"}, "ecs":common.MapStr{"version":"1.0.1"}, "host":common.MapStr{"architecture":"amd64", "hostname":"9defbc7c02e9", "ip":"100.103.216.192", "os":common.MapStr{"platform":"Linux"}}, "jvm":common.MapStr{"gc":common.MapStr{"count":355, "time":5698}}, "labels":common.MapStr{"name":"G1 Young Generation"}, "observer":common.MapStr{"ephemeral_id":"5b9e5e3b-80b5-4655-9327-c8fe420f80a7", "hostname":"apm-server-8778fd987-xtrdv", "id":"b7868a5f-ccef-4326-a2db-e44a45603d63", "type":"apm-server", "version":"7.3.0", "version_major":7}, "process":common.MapStr{"pid":1, "title":"/opt/jdk1.8.0_202/jre/bin/java"}, "processor":common.MapStr{"event":"metric", "name":"metric"}, "service":common.MapStr{"environment":"stage", "language":common.MapStr{"name":"Java", "version":"1.8.0_202"}, "name":"myapp", "runtime":common.MapStr{"name":"Java", "version":"1.8.0_202"}}}, Private:interface {}(nil), TimeSeries:false}, Flags:0x1} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [labels.name] of type [scaled_float] in document with id 'UYWh2GwBIt8udj3vizmd'. Preview of field's value: 'G1 Young Generation'","caused_by":{"type":"number_format_exception","reason":"For input string: \"G1 Young Generation\""}}

I'm a bit puzzled about where that comes from. What would a CaptureTransaction have to do with GC statistics?
Also, I have a second service which is Spring based, and I don't see those kinds of errors there.

Any help/pointers are appreciated :slight_smile:

Hi and welcome to the forum :wave:

It seems like labels.name is already mapped as scaled_float. One possible reason is that you are using the public API to add a numeric label, something like transaction.addLabel("name", 42);.
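You can also check how labels.name is currently mapped with the field mapping API; just a sketch, assuming Elasticsearch is reachable on localhost:9200 and the default apm-* index pattern:

    # Show the current mapping of labels.name across the apm indices
    curl -s 'http://localhost:9200/apm-*/_mapping/field/labels.name?pretty'

The mapping is decided per index when the field first appears, so this also tells you which indices are affected.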

When the agent sends GC metrics, it also sends a labels.name field whose value is the name of the memory manager, as a string.

To find out which documents already contain a labels.name field, open Kibana, go to the Discover tab, and search for labels.name: *.
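Outside Kibana, a rough equivalent is an exists query against the apm indices (again just a sketch, assuming Elasticsearch is reachable on localhost:9200):

    # Return a few documents that already contain a labels.name field
    curl -s 'http://localhost:9200/apm-*/_search?q=_exists_:labels.name&size=3&pretty'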

Hello there,

Thanks for your swift answer. Unfortunately, I only had time today to further investigate what we are doing.
First of all, I can't find anything in Discover about labels.name, which is odd...
Second, I looked at our code, and the only things we do are @CaptureTransaction and @CaptureSpan. We don't set any transaction.addLabel or the like.
I don't know whether this is important, but the code we are instrumenting is not a service, just a worker consuming messages from a stream.
Any other hints on where this might come from?

I also had a look at the actual index mapping for the index apm-7.3.0-transactions:
"dynamic_templates": [
{
"labels": {
"path_match": "labels.*",
"mapping": {
"scaling_factor": 1000000,
"type": "scaled_float"
}
}
},

Well, but this index mapping is created by apm-server itself, isn't it?

Hi there!

By default, labels has several dynamic mappings, and they all apply. Do you find something like this in your template?

      {
        "labels": {
          "mapping": {
            "type": "keyword"
          },
          "match_mapping_type": "string",
          "path_match": "labels.*"
        }
      },
      {
        "labels": {
          "mapping": {
            "type": "boolean"
          },
          "match_mapping_type": "boolean",
          "path_match": "labels.*"
        }
      },
      {
        "labels": {
          "mapping": {
            "scaling_factor": 1000000,
            "type": "scaled_float"
          },
          "match_mapping_type": "*",
          "path_match": "labels.*"
        }
      },

You can also check whether you are actually using the template generated by apm-server by running apm-server export template > template.json and comparing that with the one you have in Elasticsearch.
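For the Elasticsearch side of that comparison, something along these lines should do (just a sketch, assuming the cluster is reachable on localhost:9200 and the template kept its default name apm-7.3.0):

    # Export the template apm-server would install
    apm-server export template > template.json
    # Fetch the template currently installed in Elasticsearch
    curl -s 'http://localhost:9200/_template/apm-7.3.0?pretty' > es-template.json
    # ...then compare the two files, e.g. with diff or a JSON-aware diff tool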

Good hint about comparing the template from apm-server with the one in Elasticsearch.
If I read this correctly, the mapping in ES is not the same as the one exported from apm-server...

apm-server:

{
  "index_patterns": [
    "apm-7.3.0*"
  ],
  "mappings": {
    "_meta": {
      "beat": "apm",
      "version": "7.3.0"
    },
    "_source": {
      "enabled": true
    },
    "date_detection": false,
    "dynamic_templates": [
      {
        "labels": {
          "mapping": {
            "type": "keyword"
          },
          "match_mapping_type": "string",
          "path_match": "labels.*"
        }
      },
      {
        "container.labels": {
          "mapping": {
            "type": "keyword"
          },
          "match_mapping_type": "string",
          "path_match": "container.labels.*"
        }
      },

And Elasticsearch:

{
  "apm-7.3.0-transaction-000002": {
    "mappings": {
      "_meta": {
        "beat": "apm",
        "version": "7.3.0"
      },
      "dynamic_templates": [
        {
          "labels": {
            "path_match": "labels.*",
            "mapping": {
              "scaling_factor": 1000000,
              "type": "scaled_float"
            }
          }
        },
        {
          "container.labels": {
            "path_match": "container.labels.*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        },

Further down in the apm-server exported template there are more labels settings:

      {
        "labels": {
          "mapping": {
            "type": "keyword"
          },
          "match_mapping_type": "string",
          "path_match": "labels.*"
        }
      },
      {
        "labels": {
          "mapping": {
            "type": "boolean"
          },
          "match_mapping_type": "boolean",
          "path_match": "labels.*"
        }
      },
      {
        "labels": {
          "mapping": {
            "scaling_factor": 1000000,
            "type": "scaled_float"
          },
          "match_mapping_type": "*",
          "path_match": "labels.*"
        }
      },

Hm... not sure what's going on here. Any more help is of course highly appreciated :slight_smile:
Thanks in advance!

Ok, that explains the issue. Unfortunately, it is not so difficult to end up with a wrong template, so I couldn't say how it happened. Maybe setup.template.enabled was false, maybe someone changed it, maybe there was an issue with upgrading/downgrading versions, or maybe it is because of the Kafka or Logstash output (if you are using one of those).

To fix it, you should load the bundled template. If you are using the Elasticsearch output, you can do it with apm-server -e setup template; otherwise, https://www.elastic.co/guide/en/apm/server/7.3/_manually_loading_template_configuration.html provides a good explanation.
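If you end up on the manual route, it essentially boils down to exporting the bundled template and PUTting it into Elasticsearch yourself. Roughly like this (just a sketch, assuming the cluster is reachable on localhost:9200 and the default template name apm-7.3.0):

    # Export the bundled template and load it into Elasticsearch manually
    apm-server export template > apm-template.json
    curl -XPUT -H 'Content-Type: application/json' \
         'http://localhost:9200/_template/apm-7.3.0' \
         -d @apm-template.json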

Keep in mind that changes will be effective for future indices only, not the existing ones.
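If you don't want to wait for the next regular rollover, you could trigger one manually so new documents land in a fresh index that picks up the corrected template. This is only a sketch and assumes the default write alias naming (e.g. apm-7.3.0-transaction, matching the apm-7.3.0-transaction-000002 index above); repeat for the other event types if needed:

    # Force a rollover of the transaction write alias so a new index is created
    curl -XPOST 'http://localhost:9200/apm-7.3.0-transaction/_rollover?pretty'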

Hope that helps!

Hello,

Thanks for your reply and the interesting explanation.
My apm-server setup is very basic. It is deployed via the Helm chart like this:

kind: Deployment
replicaCount: 2

service:
  enabled: true

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: traefik

config:
  apm-server:
    host: 0.0.0.0:8200
    rum:
      enabled: true
  output.file:
    enabled: false

resources:
  requests:
    cpu: 80m
    memory: 128Mi
  limits:
    cpu: 150m
    memory: 256Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80

I went ahead and ran apm-server -e setup template. I will check in Kibana how often the apm-* indices get rotated.
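For reference, I'll probably just keep an eye on the apm-* indices and their creation dates from the command line (assuming the cluster answers on localhost:9200):

    # List apm indices with their creation dates and doc counts
    curl -s 'http://localhost:9200/_cat/indices/apm-*?v&h=index,creation.date.string,docs.count'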
Thanks so far :slight_smile:
