Failed to transform sample model.Sample

Kibana version: 7.9.2

Elasticsearch version: 7.9.2

APM Server version: 7.9.2

APM Agent language and version: Java 1.18.0

Browser version: N/A

Original install method (e.g. download page, yum, deb, from source, etc.) and version: deb

Fresh install or upgraded from other version? upgrade

Is there anything special in your setup? For example, are you using the Logstash or Kafka outputs? Are you using a load balancer in front of the APM Servers? Have you changed index pattern, generated custom templates, changed agent configuration etc. Nope

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):
After a recent upgrade (possibly to 7.9.0), the APM Server logs are full of sample transform errors.

Steps to reproduce:

  1. Set up the APM agent in a service that uses HikariCP (3.4.5); a minimal sketch follows below
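For reference, a minimal repro sketch of such a setup (my assumption, not taken from the original report): a plain Java service that wires HikariCP's Micrometer metrics tracker to a MeterRegistry so the agent's Micrometer integration (new in 1.18.0) starts reporting hikaricp.* metrics. In a Spring Boot application this binding happens automatically; all class names and pool settings below are illustrative.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.metrics.micrometer.MicrometerMetricsTrackerFactory;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.sql.Connection;

public class HikariMetricsRepro {
    public static void main(String[] args) throws Exception {
        // Spring Boot supplies its own MeterRegistry; a SimpleMeterRegistry stands in here.
        MeterRegistry registry = new SimpleMeterRegistry();

        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:h2:mem:test");   // any JDBC URL works for the repro
        config.setMinimumIdle(2);
        config.setMaximumPoolSize(50);
        config.setMetricsTrackerFactory(new MicrometerMetricsTrackerFactory(registry));

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // Borrow and return connections so the usage/acquire counters move.
            for (int i = 0; i < 100; i++) {
                try (Connection c = ds.getConnection()) { /* no-op */ }
                Thread.sleep(100);
            }
        }
    }
}

Run it with the agent attached (java -javaagent:elastic-apm-agent.jar ...) and the hikaricp.* metricsets should appear in the agent and server logs.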

Provide logs and/or server output (if relevant):

Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.idle", Value:2, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.creation.count", Value:600, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.creation.sum.us", Value:7.0216e+07, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.timeout", Value:6, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.pending", Value:0, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.creation.sum.us", Value:5.7659e+07, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.active", Value:0, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.idle", Value:2, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.creation.count", Value:511, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.usage.count", Value:128944, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.usage.sum.us", Value:1.983578e+09, Values:[]float64(nil), Counts:[]int64(nil)}
Sep 25 10:23:09 apm-server[31357]: WARN [transform] model/metricset.go:141 failed to transform sample model.Sample{Name:"hikaricp.connections.max", Value:50, Values:[]float64(nil), Counts:[]int64(nil)}

Hi @OrangeDog,
are you using Micrometer? If so, this might be related to a bug recently found in the Java agent. There is a PR up to fix it; you can follow along at apm-agent-java/pull/1419.

Micrometer is there, via Spring Boot.
This doesn't look like that issue though.

Note that the log shows errors for both finite float64 and int64 values.

Hi @OrangeDog,

  • Do you have any errors or warnings on the agent side?
  • What versions of Micrometer / Spring Boot / other libraries are you using?

The only logs I see from the agent look like this:

2020-09-28 00:39:52,873 [XNIO-1 task-14] INFO co.elastic.apm.agent.cache.WeakKeySoftValueLoadingCache - The value for the key org.springframework.boot.loader.LaunchedURLClassLoader@5910e440 has been collected, reloading it

It's Spring Boot 2.3.3 (so Micrometer 1.5.4).

A fast way to help debug this: could you try setting the Java agent log_level=trace and provide the lines that contain metricset and hikaricp?

Something along the lines of

cat agent.log | grep metricset | grep hikaricp

This will make the log output very verbose, as it logs everything that the agent sends to the server.
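For reference, assuming the Java agent's standard configuration mechanisms, the trace level can be set in any of the usual ways:

# elasticapm.properties
log_level=TRACE

# or as a JVM flag
-Delastic.apm.log_level=TRACE

# or as an environment variable
ELASTIC_APM_LOG_LEVEL=TRACE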

Here are some

Sep 28 16:34:03 env[3405743]: {"metricset":{"timestamp":1601310843073000,"tags":{"pool":"HikariPool-2"},"samples":{"hikaricp.connections.max":{"value":50.0},"hikaricp.connections.usage.count":{"value":46},"hikaricp.connections.usage.sum.us":{"value":409000.0},"hikaricp.connections":{"value":3.0},"hikaricp.connections.active":{"value":0.0},"hikaricp.connections.min":{"value":2.0},"hikaricp.connections.acquire.count":{"value":46},"hikaricp.connections.acquire.sum.us":{"value":159244.109},"hikaricp.connections.timeout":{"value":0.0},"hikaricp.connections.pending":{"value":0.0},"hikaricp.connections.idle":{"value":3.0},"hikaricp.connections.creation.count":{"value":0},"hikaricp.connections.creation.sum.us":{"value":0.0}}}}
Sep 28 16:34:03 env[3405743]: {"metricset":{"timestamp":1601310843073000,"tags":{"pool":"HikariPool-1"},"samples":{"hikaricp.connections.max":{"value":50.0},"hikaricp.connections.usage.count":{"value":50},"hikaricp.connections.usage.sum.us":{"value":889000.0},"hikaricp.connections.acquire.count":{"value":50},"hikaricp.connections.acquire.sum.us":{"value":217912.23},"hikaricp.connections.min":{"value":2.0},"hikaricp.connections":{"value":3.0},"hikaricp.connections.timeout":{"value":0.0},"hikaricp.connections.pending":{"value":0.0},"hikaricp.connections.active":{"value":0.0},"hikaricp.connections.idle":{"value":3.0},"hikaricp.connections.creation.count":{"value":0},"hikaricp.connections.creation.sum.us":{"value":0.0}}}}
Sep 28 16:34:33 env[3405743]: {"metricset":{"timestamp":1601310873073000,"tags":{"pool":"HikariPool-2"},"samples":{"hikaricp.connections.max":{"value":50.0},"hikaricp.connections.usage.count":{"value":60},"hikaricp.connections.usage.sum.us":{"value":416000.0},"hikaricp.connections":{"value":3.0},"hikaricp.connections.active":{"value":0.0},"hikaricp.connections.min":{"value":2.0},"hikaricp.connections.acquire.count":{"value":60},"hikaricp.connections.acquire.sum.us":{"value":172080.99},"hikaricp.connections.timeout":{"value":0.0},"hikaricp.connections.pending":{"value":0.0},"hikaricp.connections.idle":{"value":3.0},"hikaricp.connections.creation.count":{"value":0},"hikaricp.connections.creation.sum.us":{"value":0.0}}}}
Sep 28 16:34:33 env[3405743]: {"metricset":{"timestamp":1601310873073000,"tags":{"pool":"HikariPool-1"},"samples":{"hikaricp.connections.max":{"value":50.0},"hikaricp.connections.usage.count":{"value":64},"hikaricp.connections.usage.sum.us":{"value":896000.0},"hikaricp.connections.acquire.count":{"value":64},"hikaricp.connections.acquire.sum.us":{"value":243931.7},"hikaricp.connections.min":{"value":2.0},"hikaricp.connections":{"value":3.0},"hikaricp.connections.timeout":{"value":0.0},"hikaricp.connections.pending":{"value":0.0},"hikaricp.connections.active":{"value":0.0},"hikaricp.connections.idle":{"value":3.0},"hikaricp.connections.creation.count":{"value":0},"hikaricp.connections.creation.sum.us":{"value":0.0}}}}
Sep 28 16:35:03 env[3405743]: {"metricset":{"timestamp":1601310903073000,"tags":{"pool":"HikariPool-2"},"samples":{"hikaricp.connections.max":{"value":50.0},"hikaricp.connections.usage.count":{"value":74},"hikaricp.connections.usage.sum.us":{"value":423000.0},"hikaricp.connections":{"value":2.0},"hikaricp.connections.active":{"value":0.0},"hikaricp.connections.min":{"value":2.0},"hikaricp.connections.acquire.count":{"value":74},"hikaricp.connections.acquire.sum.us":{"value":184358.326},"hikaricp.connections.timeout":{"value":0.0},"hikaricp.connections.pending":{"value":0.0},"hikaricp.connections.idle":{"value":2.0},"hikaricp.connections.creation.count":{"value":0},"hikaricp.connections.creation.sum.us":{"value":0.0}}}}
Sep 28 16:35:03 env[3405743]: {"metricset":{"timestamp":1601310903073000,"tags":{"pool":"HikariPool-1"},"samples":{"hikaricp.connections.max":{"value":50.0},"hikaricp.connections.usage.count":{"value":78},"hikaricp.connections.usage.sum.us":{"value":903000.0},"hikaricp.connections.acquire.count":{"value":78},"hikaricp.connections.acquire.sum.us":{"value":269137.9},"hikaricp.connections.min":{"value":2.0},"hikaricp.connections":{"value":2.0},"hikaricp.connections.timeout":{"value":0.0},"hikaricp.connections.pending":{"value":0.0},"hikaricp.connections.active":{"value":0.0},"hikaricp.connections.idle":{"value":2.0},"hikaricp.connections.creation.count":{"value":0},"hikaricp.connections.creation.sum.us":{"value":0.0}}}}

Thanks to your report, we have identified the source of the issue, which is now tracked here: https://github.com/elastic/apm/issues/347
The actual issue is due to how apm-server stores metrics, but there is no straightforward way to solve it at the moment.

The impact on application monitoring is that those conflicting metrics are simply ignored.

Currently there is no way to simply ignore those metrics with the disable_metrics configuration option, due to https://github.com/elastic/apm-agent-java/pull/1421. A fix for this will be available by tomorrow in our snapshot version. That will allow you to selectively ignore the conflicting metrics while keeping all the others.
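Once that fix is in, something along these lines should let you drop the HikariCP metrics (the wildcard pattern below is illustrative; narrow it to the metrics that actually conflict in your logs):

disable_metrics=hikaricp.*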

In the meantime, the simplest workaround is to temporarily disable the Micrometer instrumentation using disable_instrumentations. This will disable all Micrometer metrics capture, which is a new feature of agent 1.18.0.

disable_instrumentations=experimental,micrometer

Or just append ,micrometer to this comma-separated list if a value is already set.
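For example (the existing value here is hypothetical, standing in for whatever you already have configured), disable_instrumentations=jdbc would become disable_instrumentations=jdbc,micrometer.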
