In case someone else stumbles across this with the same questions, here is what worked for us: Prometheus metrics autodiscovery with a Fleet-managed Elastic Agent on Kubernetes.
Unfortunately it's not well documented, and if you don't have deep knowledge of Kubernetes it is not obvious how to set it up.
Basically, the Agent is able to autodiscover Pods together with their IPs, annotations, labels and so on. You can use this information in the Prometheus integration to autodiscover any metrics endpoints, the same way a regular Prometheus server does. Your Pods need annotations or labels that you can reference in the integration configuration for the autodiscovery. So if the configuration in Fleet looks like this:
Hosts: ${kubernetes.pod.ip}:${kubernetes.annotations.prometheus.io/port|'8080'}
Metrics Path: ${kubernetes.annotations.prometheus.io/path|'/metrics'}
Condition: ${kubernetes.annotations.prometheus.io/scrape} == 'true'
The Elastic Agent then finds all Pod IPs and reads their annotations. If the port annotation is not found, the agent falls back to the default 8080, and the same goes for the metrics path annotation. The prometheus.io/scrape annotation is a must so that the agent only tries to collect data from Pods that actually expose a custom metrics endpoint.
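For illustration, assuming a Pod IP of 10.42.0.7 (made up here) and the example annotations shown further below, the variables would resolve roughly like this:
Hosts: 10.42.0.7:8090
Metrics Path: /custom/metrics
Condition: 'true' == 'true' (the Pod gets scraped)
A Pod that only carries the scrape annotation would instead be scraped at 10.42.0.7:8080 on /metrics.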
Example annotations in your pod:
annotations:
  prometheus.io/scrape: "true"    # annotation values must be strings, so quote booleans and numbers
  prometheus.io/path: /custom/metrics
  prometheus.io/port: "8090"
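Note that in a Deployment the annotations have to go on the Pod template, not on the Deployment object itself. A minimal sketch (the name and image below are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /custom/metrics
        prometheus.io/port: "8090"
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:latest   # placeholder image
          ports:
            - containerPort: 8090            # should match prometheus.io/port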
You can also use labels instead, or a mix of both. Labels are referenced as ${kubernetes.labels.mylabelname}; see the sketch below.
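For example, a mixed setup could look like this (the label names here are made up for illustration; note that Kubernetes label values cannot contain slashes, so the metrics path is usually better kept as an annotation):
Hosts: ${kubernetes.pod.ip}:${kubernetes.labels.metrics-port|'8080'}
Metrics Path: ${kubernetes.annotations.prometheus.io/path|'/metrics'}
Condition: ${kubernetes.labels.scrape-metrics} == 'true'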
