New GC metrics in Elastic 8.10

We are using Elastic Stack 8.10.1 and the Elastic Java agent 1.42.
With the new changes to GC collection and memory pool usage, we would like to clarify the following:

  1. GC collection count per minute is too high (I see 510,000 per minute)
  2. GC collection time per minute is also really high (8,600,500 per minute)

Is there any document describing what these mean? For GC collection time, is it the GC pause time (excluding concurrent mark time, since the application keeps running during the concurrent phase)?

Heap memory usage by pool shows only old generation usage.

Is that showing the combined metrics of several services with the same service name?

The first issue is that it does not filter when I navigate from the instances list on the APM service overview page; I need to click the Metrics tab to get the new view with the additional metrics.
Then I see node name as a filter (the description says it should be container or pod). Metrics are not available with a single selection; once I select more than one, I start seeing metrics with really high numbers. Once I change the time range to a longer duration, the numbers drop to three digits.
Could someone clarify, or link to a document with metric descriptions for CPU, system memory (is it cgroup stats against the container memory limit?), GC heap memory, and GC time?
GC time in particular is unclear: is it application pause time, and how is it aggregated (average?) over a long duration?

The GC times are the ones produced by GarbageCollectorMXBean.getCollectionTime(); the details are documented with that method. Broadly, these should be pause times for the young generation but full cycle times for the old generation (so the old-gen times tend not to be that useful for analysis). These are correct in the Elastic documents, but they are accumulated times, not steps, and it's possible that they don't display correctly when multiple services are being shown, hence my question; it will help me to try and reproduce. From your answer it looks like you are indeed looking at stats for multiple services in one graph. I'll look at whether there are issues for that scenario.
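For reference, the values come from the standard JMX garbage collector beans, so you can inspect them directly in any JVM. A minimal sketch (the class name is mine) that prints the cumulative counts and times per collector — note these are accumulated since JVM start, so a per-minute figure is the delta between two samples, not the raw value:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Each bean corresponds to one collector (e.g. "G1 Young Generation",
        // "G1 Old Generation"). getCollectionCount()/getCollectionTime() are
        // cumulative totals; -1 means the value is undefined for that collector.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d, time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Sampling this twice a minute apart and subtracting gives the kind of per-minute rate the APM charts are meant to show, which is why aggregating the raw accumulated values across multiple service instances can produce implausibly large numbers.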