APM Server upgrade to 7.17.6 is consuming more resources

Hi Elastic Fans...
I am in the process of upgrading Elastic APM Server from 7.14.2 to 7.17.6.
Things were working as expected in 7.14.2; we upgraded to 7.17.6 to clear vulnerability scan findings.
APM Server is managed by the operator. Previously, three pods were sufficient for my sandbox/development cluster; now, under a similar load, the HPA kicks in and I need to go up to 10 pods, with a reduction in throughput.

Ask: are we making any mistakes? By the way, grouping is done by 'kubernetes.pod.name' in the visualisations below.

I followed the documentation on breaking changes (if any) from here..

As far as I understand the breaking changes, technically I only had to comment out the ILM/warm settings and change the version number in my apm-server manifest.
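For context, here is a rough sketch of the kind of change involved, assuming the operator in question is ECK. The names, namespace, and pod count below are placeholders, not our actual manifest:

```yaml
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: apm-server            # placeholder name
  namespace: observability    # placeholder namespace
spec:
  version: 7.17.6             # bumped from 7.14.2
  count: 3                    # pod count that used to be sufficient
  elasticsearchRef:
    name: elasticsearch       # placeholder reference
  config:
    # the ILM / warm-phase overrides we previously had here are now
    # commented out, per the breaking-change notes
    apm-server.ilm.enabled: "auto"
```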

Question:
Are there any hints you can provide? (We did not change the queue size or any other sensitive parameters; the defaults we are running are shown below.)
We made sure the support matrix for the Elastic Stack products is followed.
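For reference, these are the performance-related settings we left untouched. The values shown are what I understand the 7.x defaults to be (please correct me if they are off); this is only to make clear we are running stock settings, not a copy of our manifest:

```yaml
spec:
  config:
    queue.mem.events: 4096                    # in-memory queue size (default, untouched)
    output.elasticsearch.worker: 1            # workers per ES host (default, untouched)
    output.elasticsearch.bulk_max_size: 5120  # events per bulk request (default, untouched)
```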

Unfortunately, we had neither internal monitoring nor X-Pack/Metricbeat-based monitoring enabled for APM Server, so we do not have before-and-after comparison data for the upgrade. To be frank, though, the existing custom dashboards/visualisations are enough to indicate that there is an issue.
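For the next round we are thinking of at least turning on self-monitoring so we have comparison data going forward; a minimal sketch, assuming the legacy internal collection documented for 7.x (field names to be double-checked against the docs):

```yaml
spec:
  config:
    monitoring.enabled: true
    # monitoring.elasticsearch: {...}  # optional; if omitted, metrics are shipped
    #                                  # to the configured output cluster
```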

Hello!
Any clues?