Metricbeat Prometheus Autodiscovery

Trying to get Metricbeat to scrape Prometheus metrics from a service running in EKS.
I have followed the examples and docs (and probably many others) to no avail.

I am using the official Helm chart to deploy Metricbeat.
metricbeat-values.yaml

---
daemonset:
  # Include the daemonset
  enabled: true
  hostNetworking: false
  # Allows you to add any config files in /usr/share/metricbeat
  # such as metricbeat.yml for daemonset
  metricbeatConfig:
    metricbeat.yml: |
      metricbeat.config.modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
      metricbeat.modules:
      - module: kubernetes
        metricsets:
          - container
          - node
          - pod
          - system
          - volume
        period: 10s
        host: "${NODE_NAME}"
        hosts: ["https://${NODE_NAME}:10250"]
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl.verification_mode: "none"
        processors:
        - add_kubernetes_metadata: ~
      - module: kubernetes
        enabled: true
        metricsets:
          - event
      - module: system
        period: 10s
        metricsets:
          - cpu
          - load
          - memory
          - network
          - process
          - process_summary
        processes: ['.*']
        process.include_top_n:
          by_cpu: 5
          by_memory: 5
      - module: system
        period: 1m
        metricsets:
          - filesystem
          - fsstat
        processors:
        - drop_event.when.regexp:
            system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
      output.elasticsearch:
        hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
      metricbeat.autodiscover:
        providers:
          - type: kubernetes
            host: ${HOSTNAME}
            hints.enabled: true
            templates:
              - condition.equals:
                  kubernetes.annotations.prometheus.io/scrape: "true"
                config:
                  - module: prometheus
                    period: 10s
                    hosts: ["${data.host}:${data.kubernetes.annotations.prometheus.io/port}"]
                    metrics_path: /metrics
  securityContext:
    runAsUser: 0
    privileged: false
  resources:
    requests:
      cpu: "100m"
      memory: "100Mi"
    limits:
      cpu: "1000m"
      memory: "200Mi"
  tolerations: []

deployment:
  metricbeatConfig:
    metricbeat.yml: |
      metricbeat.config.modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: true
      metricbeat.autodiscover:
        providers:
          - type: kubernetes
            host: ${HOSTNAME}
            templates:
              - condition.equals:
                  kubernetes.annotations.prometheus.io/scrape: "true"
                config:
                  - module: prometheus
                    period: 10s
                    # Prometheus exporter host / port
                    hosts: ["${data.host}:${data.kubernetes.annotations.prometheus.io/port}"]
                    metrics_path: /metrics
      metricbeat.modules:
      - module: kubernetes
        enabled: true
        metricsets:
          - state_node
          - state_deployment
          - state_replicaset
          - state_pod
          - state_container
        period: 10s
        hosts: ["${KUBE_STATE_METRICS_HOSTS}"]
      output.elasticsearch:
        hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
  nodeSelector: {}
  securityContext:
    runAsUser: 0
    privileged: false
  resources:
    requests:
      cpu: "100m"
      memory: "100Mi"
    limits:
      cpu: "1000m"
      memory: "200Mi"
  tolerations: []

# Replicas being used for the kube-state-metrics metricbeat deployment
replicas: 1

# Root directory where metricbeat will write data to in order to persist registry data across pod restarts (file position and other metadata).
hostPathRoot: /var/lib

image: "docker.elastic.co/beats/metricbeat"
imageTag: "7.15.0"
imagePullPolicy: "IfNotPresent"

# How long to wait for metricbeat pods to stop gracefully
terminationGracePeriod: 30

updateStrategy: RollingUpdate

kube_state_metrics:
  enabled: true
  # host is used only when kube_state_metrics.enabled: false
  host: ""

Here is the annotations section from the Pod/container I am trying to scrape:

Annotations:  
              co.elastic.metrics.sifnode/hosts: ${data.host}:1317
              co.elastic.metrics/hosts: ${data.host}:1317
              co.elastic.metrics/metricsets: collector
              co.elastic.metrics/module: prometheus
              co.elastic.metrics/period: 1m
              kubernetes.io/psp: eks.privileged
              prometheus.io/port: 1317
              prometheus.io/scrape: true
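
For reference, here is a rough sketch of how annotations like these are attached to the pod template in the workload manifest; the Deployment name, labels, and image below are placeholders, not my actual manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sifnode                       # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sifnode
  template:
    metadata:
      labels:
        app: sifnode
      annotations:
        # Hints read by Metricbeat's kubernetes autodiscover provider
        co.elastic.metrics/module: prometheus
        co.elastic.metrics/metricsets: collector
        co.elastic.metrics/hosts: "${data.host}:1317"
        co.elastic.metrics/period: 1m
        # Conventional Prometheus annotations (matched by the template condition above)
        prometheus.io/scrape: "true"
        prometheus.io/port: "1317"
    spec:
      containers:
        - name: sifnode
          image: example/sifnode:latest   # placeholder image
          ports:
            - containerPort: 1317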

I have confirmed that hitting the /metrics endpoint on the pod returns the expected Prometheus response.

However, the metrics are not showing up in Elasticsearch in the metricbeat index.

Bump...

Could really use some help getting this figured out.

Hi @Lance_Z !

As far as I can see, you are using kubernetes.annotations.prometheus.io/scrape: "true" as a condition, so you are not actually using hints but templates instead. Is this what you actually want to do?

In general using

co.elastic.metrics/hosts: ${data.host}:1317
co.elastic.metrics/metricsets: collector
co.elastic.metrics/module: prometheus
co.elastic.metrics/period: 1m

as hints in the Pod's annotations and something like

- type: kubernetes
  node: ${NODE_NAME}
  hints.enabled: true

would do the trick.
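
For completeness, here is a rough sketch of how that hints-based provider would slot into the daemonset section of your values file; the existing metricbeat.modules entries and the Elasticsearch output stay exactly as they are and are trimmed here for brevity:

daemonset:
  metricbeatConfig:
    metricbeat.yml: |
      metricbeat.config.modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
      # ... existing metricbeat.modules entries unchanged ...
      metricbeat.autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints.enabled: true
            # With hints enabled, the co.elastic.metrics/* annotations on the
            # pod supply the module, metricsets, hosts and period, so no
            # templates block is needed for the prometheus scrape.
      output.elasticsearch:
        hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'

This assumes NODE_NAME is exported into the pod environment, which the official chart does and which your daemonset config already relies on.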
