We are looking into monitoring Kubernetes clusters with Elastic Agents using the Kubernetes integration. We are using Fleet to manage our Elastic Agents and are able to collect metrics from the kubelet API, kube-state-metrics and so on.
Now we want to collect the metrics that the applications themselves expose via their /metrics endpoints. The “Prometheus Input” integration can scrape metrics from endpoints if you know them in advance. But we don’t know the endpoints or how many applications expose one, so we want to use labels to autodiscover the /metrics endpoints, similar to how Prometheus does it.
The goal is to tell the developers which labels to set on their Kubernetes applications so that their application metrics get ingested by the Agent.
The question is: Is there a way to autodiscover /metrics endpoints based on labels with our setup?
Our cluster and Agents are on v9.1.5, and we are using Elastic Package Registry 9.1.2.
Unfortunately this is not well documented, and if you don’t have deep knowledge of Kubernetes it is not obvious how to do it.
Basically, the Agent is able to autodiscover Pods along with their IPs, annotations and so on. You can use this information in the Prometheus integration to autodiscover any metrics endpoints the same way Prometheus does. Your Pods need to carry the annotations or labels that you reference in the integration configuration for the autodiscovery. So the configuration in Fleet could look like this:
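Below is a minimal sketch of what the resulting Prometheus collector input could look like in the agent policy (in the Fleet UI you would put the corresponding values into the Hosts, Metrics Path and Condition fields). The `${kubernetes.*}` variables come from the Agent's Kubernetes provider; the exact annotation names and the `${var|'default'}` default-value notation are assumptions here:

```yaml
- data_stream:
    dataset: prometheus.collector
    type: metrics
  metricsets:
    - collector
  # Pod IP comes from the Kubernetes provider; the port falls back to 8080
  # if the prometheus.io/port annotation is missing.
  hosts:
    - "${kubernetes.pod.ip}:${kubernetes.annotations.prometheus.io/port|'8080'}"
  # Metrics path falls back to /metrics if the annotation is missing.
  metrics_path: "${kubernetes.annotations.prometheus.io/path|'/metrics'}"
  period: 30s
  # Only scrape pods that explicitly opt in via the scrape annotation.
  condition: ${kubernetes.annotations.prometheus.io/scrape} == "true"
```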
The Elastic Agent now finds all pod IPs and reads their annotations. If the port annotation is not found, the agent uses the default 8080; the same goes for the metrics-path annotation. The prometheus.io/scrape annotation is a must so that the agent only tries to get data from pods which actually have a custom metrics endpoint.
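For the developers, that means annotating their pod spec accordingly. A hypothetical example (the pod name, image and port are placeholders; only the prometheus.io/* annotations matter):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                      # hypothetical example pod
  annotations:
    prometheus.io/scrape: "true"    # required, otherwise the agent skips the pod
    prometheus.io/port: "9100"      # optional, agent falls back to 8080
    prometheus.io/path: "/metrics"  # optional, agent falls back to the default path
spec:
  containers:
    - name: my-app
      image: my-app:1.0             # placeholder image
      ports:
        - containerPort: 9100       # port where the app serves its metrics
```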