Dec 23rd, 2018: [EN][Observability] Querying metrics from Prometheus with Elastic Stack


(Tanya Bragin) #1

The Prometheus exposition format has become a popular way to export metrics from a number of systems. Prometheus provides client libraries for defining custom metrics within your application and exposing those metrics for collection by a monitoring server. The community has also created a number of exporters for common systems. There is an ongoing proposal within CNCF, the open source foundation that is home to Prometheus, to create an OpenMetrics standard heavily influenced by the Prometheus exposition format.
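The exposition format itself is a simple line-oriented text format: one sample per line, with optional labels in braces. As an illustration, here is a hypothetical helper (not part of any client library) that renders a single sample in that format:

```python
def exposition_line(name, labels, value):
    """Render one sample in the Prometheus text exposition format:
    metric_name{label="value",...} value
    """
    # Sort labels for a deterministic order in the output line.
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = exposition_line(
    "http_requests_total",
    {"method": "get", "code": "200"},
    1027,
)
print(line)
# http_requests_total{code="200",method="get"} 1027
```

A real client library additionally emits `# HELP` and `# TYPE` comment lines and handles escaping, but the per-sample shape is what you see above.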

Before we dive into the details, it is important to review the key components of Prometheus:

  • Prometheus Exporters. These are components embedded in, or running alongside, the monitored application. They expose an HTTP endpoint from which a monitoring system can periodically scrape metrics in a pull fashion.
  • Prometheus Server. The Prometheus server is one of several monitoring systems that can scrape Prometheus exporters. It provides a query language and typically relies on Grafana for metrics visualization.

The Prometheus server does not support clustering, so sharding, high availability, and horizontal scaling must be handled by the user outside the system. The Prometheus server also does not support fine-grained security features such as data encryption in transit. As a result, while it is a great way to get started with metrics, users often look to other systems for long-term storage of the metrics collected via Prometheus exporters.

The Elastic Stack is increasingly used as a single operational data store for logs, metrics, and trace data. As a result, we are often asked whether it is possible to ingest metrics from Prometheus exporters or integrate with the Prometheus server.

This blog addresses common ways to easily do just that.

Scraping Prometheus exporters with Metricbeat

Metricbeat is a lightweight shipper purpose-built to work with the Elastic Stack. It can fetch data from dozens of sources and provides modules for automatically storing this data in Elasticsearch and displaying dashboards in Kibana.

One of the data sources it integrates with is Prometheus exporters. This can be done with the Metricbeat Prometheus module, which will query Prometheus exporters at the user-defined frequency. This method does not require Prometheus server to be in place, as the communication is directly between Metricbeat and Prometheus exporters.

Below is a sample configuration for querying Prometheus exporters via this module. Note that the module supports TLS to ensure secure data transfer.

metricbeat.modules:
- module: prometheus
  metricsets: ["collector"]
  enabled: true
  period: 10s
  hosts: ["localhost:9090"]
  #metrics_path: /metrics
  #namespace: example

  # This can be used for service account based authorization:
  # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  #ssl.certificate_authorities:
  # - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

Once the data is flowing, you will see documents with a structure similar to the following in Elasticsearch:

{
  "@timestamp": "2017-10-12T08:05:34.853Z",
  "beat": {
    "hostname": "host.example.com",
    "name": "host.example.com"
  },
  "metricset": {
    "host": "prometheus:9090",
    "module": "prometheus",
    "name": "collector",
    "namespace": "collector",
    "rtt": 115
  },
  "prometheus": {
    "collector": {
      "label": {
        "event": "add",
        "role": "node"
      },
      "prometheus_sd_kubernetes_events_total": {
        "value": 0
      }
    }
  }
}
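Conceptually, the collector metricset turns each exposition line into the prometheus.collector.<metric> fields seen above. A minimal parsing sketch under that assumption (hypothetical code, not Metricbeat's actual implementation):

```python
import re

# Matches lines like: metric_name{label="x"} 42
SAMPLE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
    r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$'
)

def parse_exposition(text):
    """Parse Prometheus text exposition lines into dicts that resemble
    the prometheus.collector.* fields shown in the document above."""
    docs = []
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks and # HELP / # TYPE comment lines.
        if not line or line.startswith("#"):
            continue
        m = SAMPLE_RE.match(line)
        if not m:
            continue
        labels = {}
        if m.group("labels"):
            for pair in m.group("labels").split(","):
                k, v = pair.split("=", 1)
                labels[k] = v.strip('"')
        docs.append({
            "label": labels,
            m.group("name"): {"value": float(m.group("value"))},
        })
    return docs

sample = '''# HELP prometheus_sd_kubernetes_events_total Events.
# TYPE prometheus_sd_kubernetes_events_total counter
prometheus_sd_kubernetes_events_total{event="add",role="node"} 0
'''
print(parse_exposition(sample))
```

This sketch ignores escaped quotes and histogram/summary conventions, which a production parser must handle.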

Retrieving metrics from Prometheus server

If you already have a Prometheus server in place and would like to retrieve metrics directly from it, there are currently two ways of doing that.

Metricbeat Prometheus module

Prometheus provides a federation endpoint, which can be leveraged to retrieve all metrics from a Prometheus server using a URL of the following format: http://<prometheus_url>/federate?match[]={__name__!=""}
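Since the match[] selector contains characters that are not valid in a raw URL, calling the endpoint directly requires an encoded query string. A quick sketch in Python (the server address is a placeholder; substitute your own):

```python
from urllib.parse import urlencode

# Hypothetical Prometheus server address; substitute your own.
base = "http://localhost:9090/federate"

# Select every series via the negative matcher {__name__!=""}.
query = urlencode({"match[]": '{__name__!=""}'})
url = f"{base}?{query}"
print(url)
```

Metricbeat performs this encoding for you when you supply the selector via the `query` setting, as in the configuration below.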

To configure the Metricbeat Prometheus module to do that, you can use the following configuration.

metricbeat.modules:
- module: prometheus
  period: 10s
  hosts: ["<prometheus_url>"]
  metrics_path: '/federate'
  query:
    'match[]': '{__name__!=""}'
  namespace: example

Community “prometheusbeat”

Prometheusbeat is a Beat that can receive Prometheus metrics via the remote write feature. It is listed on the Prometheus integrations page and is maintained by a community member.
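On the Prometheus side, forwarding metrics via remote write is configured in prometheus.yml. A minimal sketch follows; the listen address and path depend on how prometheusbeat is configured, so localhost:8080/prometheus here is an assumption, not a default you should rely on:

```yaml
# prometheus.yml (fragment) -- forward all samples via remote write.
# The URL must match the address prometheusbeat is listening on.
remote_write:
  - url: "http://localhost:8080/prometheus"
```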

Monitoring the health of Prometheus server

If you are also interested in monitoring the health of the Prometheus server itself, the “stats” metricset within the Metricbeat Prometheus module will help you do that.

In metricbeat.yml you can configure that as follows:

metricbeat.modules:
- module: prometheus
  metricsets: ["stats"]
  enabled: true
  period: 10s
  hosts: ["localhost:9090"]
  #metrics_path: /metrics
  #namespace: example

The resulting document will have the following structure:

{
  "@timestamp": "2017-10-12T08:05:34.853Z",
  "beat": {
    "hostname": "host.example.com",
    "name": "host.example.com"
  },
  "metricset": {
    "host": "prometheus:9090",
    "module": "prometheus",
    "name": "stats",
    "rtt": 115
  },
  "prometheus": {
    "stats": {
      "notifications": {
        "dropped": 0,
        "queue_length": 0
      },
      "processes": {
        "open_fds": 25
      },
      "storage": {
        "chunks_to_persist": 0
      }
    }
  }
}

Tell us what you think!

We hope you found this information useful! If you have any further questions, please do not hesitate to engage with us.

We have three forums where you can ask questions related to observability and operational analytics.

  • Logs, for everything related to the Logs app – setup with Filebeat, Filebeat modules, and using the Kibana Logs app.
  • Infrastructure, for everything related to the Infrastructure app – Filebeat and Metricbeat, modules, Kibana dashboards, and using the Kibana Infrastructure app.
  • APM, for everything related to APM – whether it is the APM Server, the Kibana dashboards, or the agents.
