Hi ES community,
I have deployed ES on Kubernetes using the operator, along with Metricbeat, and everything is working as expected except for Metricbeat's Prometheus autodiscover.
After enabling debug logging, I can see that Metricbeat appears to successfully scrape the Prometheus metrics via autodiscover; however, those metrics cannot be found in the backend ES stack. Here is an example of the metrics we are looking for ("vertx") from the Metricbeat debug logs, which we CANNOT find in ES via Kibana Discover:
  },
  "service": {
    "address": "http://10.88.0.53:8080/manage/prometheus",
    "type": "prometheus"
  },
  "prometheus": {
    "labels": {
      "code": "200",
      "method": "GET",
      "route": "/manage/ready",
      "instance": "10.88.0.53:8080",
      "job": "prometheus"
    },
    "metrics": {
      "vertx_http_server_requests_total": 512,
      "vertx_http_server_response_bytes_max": 70,
      "vertx_http_server_response_bytes_sum": 35840,
      "vertx_http_server_response_bytes_count": 512,
      "vertx_http_server_response_time_seconds_max": 0.075965206,
      "vertx_http_server_response_time_seconds_sum": 0.907193048,
      "vertx_http_server_response_time_seconds_count": 512
    }
  },
  "event": {
    "dataset": "prometheus.collector",
    "module": "prometheus",
    "duration": 7149238
  },
  "agent": {
    "name": "ip-10-88-0-110.eu-west-2.compute.internal",
    "type": "metricbeat",
    "version": "7.10.0",
    "hostname": "ip-10-88-0-110.eu-west-2.compute.internal",
    "ephemeral_id": "6cd32120-ae4e-4c14-8979-facce12553b9",
    "id": "9e349224-7d33-48d4-a177-4b79e0cf4073"
  },
  "ecs": {
    "version": "1.6.0"
  }
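To check whether this particular field ever reaches the cluster, I run the following in Kibana Dev Tools (assuming the default metricbeat-* indices; the field path follows the event document above):

```
GET metricbeat-*/_search
{
  "size": 1,
  "query": {
    "exists": { "field": "prometheus.metrics.vertx_http_server_requests_total" }
  }
}
```

For me this returns zero hits, matching what I see in Discover.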
The pod annotations look like this:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    co.elastic.metrics/hosts: ${data.host}:${data.port}
    co.elastic.metrics/metrics_path: /manage/prometheus
    co.elastic.metrics/metricsets: collector
    co.elastic.metrics/module: prometheus
    co.elastic.metrics/period: 1m
  creationTimestamp: "2020-11-27T17:26:14Z"
  generateName: email-validator-k8s-dep-7bf58dc94-
  labels:
    app: email-validator-k8s
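My understanding is that hints-based autodiscover should expand those annotations into roughly this module configuration (a sketch based on the annotation-to-setting mapping, with the host resolved for the example pod above):

```yaml
- module: prometheus
  metricsets: ["collector"]
  hosts: ["10.88.0.53:8080"]        # ${data.host}:${data.port} resolved for this pod
  metrics_path: /manage/prometheus
  period: 1m
```

The debug logs suggest this expansion is happening, since the scrape itself succeeds.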
The Metricbeat DaemonSet ConfigMap is here:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: logging
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          scope: cluster
          node: ${NODE_NAME}
          unique: true
          templates:
            - config:
                - module: kubernetes
                  hosts: ["kube-state-metrics:8080"]
                  period: 10s
                  add_metadata: true
                  metricsets:
                    - state_node
                    - state_deployment
                    - state_daemonset
                    - state_replicaset
                    - state_pod
                    - state_container
                    - state_cronjob
                    - state_resourcequota
                    - state_statefulset
                - module: kubernetes
                  metricsets:
                    - apiserver
                  hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
                  ssl.certificate_authorities:
                    - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                  period: 30s
                # Uncomment this to get k8s events:
                #- module: kubernetes
                #  metricsets:
                #    - event
        # To enable hints based autodiscover uncomment this:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true

    processors:
      - add_cloud_metadata:
      - add_host_metadata:
      - add_kubernetes_metadata:
      - add_docker_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      ssl.verification_mode: "none"
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: logging
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
        - drop_event.when.regexp:
            system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["https://${NODE_NAME}:10250"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
      # If there is a CA bundle that contains the issuer of the certificate used in the Kubelet API,
      # remove ssl.verification_mode entry and use the CA, for instance:
      #ssl.certificate_authorities:
      #  - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    # Currently the `proxy` metricset is not supported on OpenShift; comment out this section there:
    - module: kubernetes
      metricsets:
        - proxy
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10249"]
I cannot find these metrics anywhere in ES from Kibana. Is there anything I am missing in this setup?
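To narrow down whether no prometheus.collector events are being indexed at all, or whether just these fields are missing, I run an aggregation over the datasets in Kibana Dev Tools (assuming the default metricbeat-* indices):

```
GET metricbeat-*/_search
{
  "size": 0,
  "aggs": {
    "datasets": {
      "terms": { "field": "event.dataset", "size": 50 }
    }
  }
}
```

In my case, "prometheus.collector" does not appear among the returned buckets, while the kubernetes and system datasets do.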
Perhaps you need the metricbeat-setup.yaml too? Here it is:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-init-config
  namespace: logging
  labels:
    k8s-app: metricbeat-init
data:
  metricbeat.yml: |-
    setup.template.settings:
      index.mapping.total_fields.limit: 10000

    metricbeat.modules:
      - module: nginx
      - module: kubernetes
      - module: docker
      - module: system
      - module: mysql
      - module: postgresql
      - module: redis
      - module: istio
      - module: prometheus

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      # The ES operator comes preconfigured for HTTPS; loading the CA is too much hassle,
      # so certificate checking is simply disabled here.
      ssl.verification_mode: "none"
      hosts: ['${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    setup.kibana:
      ssl.verification_mode: "none"
      host: "https://kube-poc2-kb-http.logging.svc.cluster.local:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
Looking forward to any pointers on this. Thanks in advance for any help you can give me.