APM Dashboard Showing Unrealistic Memory Allocation (1EB/m)

We’re running a Spring Boot application on Java 8 with the latest APM Java Agent (v1.52.0) and Elasticsearch 8.13.4. Despite following the setup instructions, the APM dashboard shows a memory peak of 1 EB/m (exabytes per minute), as seen in the attached screenshot, which is clearly an impossible value for our setup.

There’s a load balancer in front of the APM servers, and we haven’t made any changes to index patterns or agent configurations.

Any idea what could be causing this? Could it be a configuration issue or a bug? We’d appreciate any advice on how to debug this! Logs and additional info available if needed.

Is this aggregating values from multiple instances?

Yes, it aggregates values from multiple pods, and that seems to be the issue. When we filter down to a single node, the metrics come up empty for every node we try. However, as soon as two or more nodes are aggregated, the unrealistic values appear. Could this be a bug in Kibana?
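As an aside (a hypothesis to check, not a confirmed diagnosis): one classic way impossible exabyte-scale readings show up in metrics pipelines is a cumulative counter that resets (e.g. on a pod restart), producing a negative delta that then gets reinterpreted as an unsigned 64-bit value. The numbers below are made up purely for illustration:

```python
# Hypothetical illustration: a cumulative allocation counter drops
# because a pod restarted, so the delta between two samples is negative.
# Reinterpreting that negative delta as an unsigned 64-bit integer
# lands in exabyte territory -- the same order of magnitude as 1 EB/m.
prev_sample = 5_000_000_000      # bytes allocated so far, before restart
curr_sample = 100_000_000        # counter restarted near zero
delta = curr_sample - prev_sample      # -4_900_000_000
wrapped = delta % 2**64                # unsigned 64-bit reinterpretation
print(f"{wrapped / 1e18:.1f} EB")      # prints "18.4 EB"
```

If something like this is happening, the raw documents around a restart should show the counter dropping, which is easy to spot once you look at them directly.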

You can look at the underlying documents (e.g. in Discover, or by querying Elasticsearch directly), take that (or any) time sample, and see whether the values are being incorrectly aggregated or whether you actually have a burst of instances and/or allocations.
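A minimal sketch of such a check. The index pattern (`metrics-apm*`), field names (`jvm.memory.heap.used`, `kubernetes.pod.name`), and service name are assumptions based on typical APM agent mappings; verify them against your own indices before running:

```python
# Sketch: pull a handful of raw APM metric documents for one time window,
# so you can check whether individual docs already contain huge values
# (agent/counter problem) or whether only the aggregated view is wrong
# (aggregation problem). Index pattern, field names, and service name
# are assumptions -- adjust to your deployment.
query = {
    "size": 20,
    "_source": ["@timestamp", "kubernetes.pod.name", "jvm.memory.heap.used"],
    "query": {
        "bool": {
            "filter": [
                {"term": {"service.name": "my-service"}},      # placeholder
                {"range": {"@timestamp": {"gte": "now-5m"}}},
            ]
        }
    },
    "sort": [{"@timestamp": "desc"}],
}
# Run as: GET metrics-apm*/_search with this body (Dev Tools / curl),
# or recreate the same filters in Discover.
```

If the per-document values look sane but the dashboard doesn't, the problem is in the aggregation; if individual documents already carry absurd numbers, look at the agent or a counter reset on the affected pods.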