[AKS] Failed to get additional AKS Cluster meta

After enabling the Kubernetes integration I got errors on all agents:

{"log.level":"warn","@timestamp":"2024-09-03T07:21:25.158Z","message":"Failed to get additional AKS Cluster meta: failed to get AKS cluster name and ID: failed to advance page: DefaultAzureCredential: failed to acquire a token.\nAttempted credentials:\n\tEnvironmentCredential: missing environment variable AZURE_TENANT_ID\n\tWorkloadIdentityCredential: no client ID specified. Check pod configuration or set ClientID in the options\n\tManagedIdentityCredential: failed to authenticate a system assigned identity. The endpoint responded with {\"error\":\"invalid_request\",\"error_description\":\"Multiple user assigned identities exist, please specify the clientId / resourceId of the identity in the token request\"}\n\tAzureCLICredential: fork/exec /bin/sh: operation not permitted\n\tAzureDeveloperCLICredential: fork/exec /bin/sh: operation not permitted","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"http/metrics-monitoring","type":"http/metrics"},"log":{"source":"http/metrics-monitoring"},"log.origin":{"file.line":142,"file.name":"add_cloud_metadata/provider_azure_vm.go","function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*azureMetadataFetcher).fetchMetadata"},"service.name":"metricbeat","ecs.version":"1.6.0","log.logger":"add_cloud_metadata","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-03T07:21:25.565Z","message":"add_cloud_metadata: received error failed with http status code 404","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"add_cloud_metadata","log.origin":{"file.line":190,"file.name":"add_cloud_metadata/providers.go","function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata"},"service.name":"filebeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}

The agents are deployed as a DaemonSet from the manifest generated on the Elastic Cloud side. The Azure Logs integration works; only the Kubernetes integration fails. I can see that the service account and the ClusterRole/ClusterRoleBinding exist.
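
For reference, the RBAC objects I checked look roughly like this (names assumed to match the standard elastic-agent manifest; trimmed to the relevant parts):

```yaml
# Sketch of the RBAC objects from the generated manifest (assumed names, trimmed).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
  - kind: ServiceAccount
    name: elastic-agent
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
```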

Strangely, the issue disappeared once I increased the resources for the agent pods (see the sketch below).
Resolved.
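
For anyone hitting the same thing, the change was along these lines on the elastic-agent container; the values here are illustrative, not an exact recommendation:

```yaml
# Fragment for the elastic-agent container: resources bumped above what the generated
# manifest ships with. Illustrative values; tune to your cluster.
resources:
  requests:
    cpu: 200m
    memory: 800Mi
  limits:
    memory: 1500Mi
```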