I'm running Metricbeat 7.7.0 as a sidecar container to scrape Prometheus metrics from an application in the same Kubernetes (GKE) pod. The configuration is fairly simple:
metricbeat.modules:
- module: prometheus
  period: ${PERIOD}
  host: ${NODE_NAME}
  hosts: ["localhost:8082"]
  metrics_path: '/metrics'
  use_types: true
  rate_counters: true

processors:
- add_cloud_metadata: ~
- add_kubernetes_metadata:
    in_cluster: true
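The ${PERIOD} and ${NODE_NAME} variables are resolved from the container environment. A minimal sketch of the sidecar container spec that injects them (the image tag is the official 7.7.0 Metricbeat image; the period value, config path, and volume name are placeholders, not my exact manifest):

- name: metricbeat
  image: docker.elastic.co/beats/metricbeat:7.7.0
  args: ["-c", "/etc/metricbeat.yml", "-e"]
  env:
  # Scrape interval consumed by the prometheus module above
  - name: PERIOD
    value: "10s"
  # Node name exposed to the container via the downward API
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  volumeMounts:
  - name: metricbeat-config
    mountPath: /etc/metricbeat.yml
    subPath: metricbeat.yml
    readOnly: true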
I have verified that the pod's ServiceAccount has the privileges needed to read from the Kubernetes API.
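For reference, the RBAC bound to that ServiceAccount looks roughly like the sketch below (the ClusterRole/ClusterRoleBinding names and the namespace are from memory and may not match the real manifests exactly):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
rules:
# add_kubernetes_metadata watches pods to enrich events, so it needs read access to these resources
- apiGroups: [""]
  resources: ["pods", "namespaces", "nodes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: service          # the ServiceAccount used by the pod
  namespace: service
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io

The logs from the Metricbeat container look like this: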
2020-05-14T04:16:11.659Z INFO add_kubernetes_metadata/kubernetes.go:71 add_kubernetes_metadata: kubernetes env detected, with version: v1.15.9-gke.24
2020-05-14T04:16:11.659Z INFO [kubernetes] kubernetes/util.go:94 kubernetes: Using pod name service-558c4b9d57-7z6wj and namespace service to discover kubernetes node {"libbeat.processor": "add_kubernetes_metadata"}
2020-05-14T04:16:11.666Z INFO [kubernetes] kubernetes/util.go:100 kubernetes: Using node %s discovered by in cluster pod node querygke-cluster-1-service-2-a0ed-n9sl {"libbeat.processor": "add_kubernetes_metadata"}
2020-05-14T04:16:13.814Z WARN [cfgwarn] collector/data.go:47 BETA: Prometheus 'use_types' setting is beta
2020-05-14T04:16:13.814Z WARN [cfgwarn] collector/data.go:50 EXPERIMENTAL: Prometheus 'rate_counters' setting is experimental
2020-05-14T04:16:14.817Z INFO [publisher_pipeline_output] pipeline/output.go:101 Connecting to backoff(elasticsearch(https://<id>.europe-west2.gcp.elastic-cloud.com:443))
2020-05-14T04:16:14.868Z INFO [esclientleg] eslegclient/connection.go:263 Attempting to connect to Elasticsearch version 7.7.0
2020-05-14T04:16:14.871Z INFO [license] licenser/es_callback.go:51 Elasticsearch license: Platinum
2020-05-14T04:16:14.872Z INFO [esclientleg] eslegclient/connection.go:263 Attempting to connect to Elasticsearch version 7.7.0
2020-05-14T04:16:14.874Z INFO [index-management] idxmgmt/std.go:258 Auto ILM enable success.
2020-05-14T04:16:14.876Z INFO [index-management.ilm] ilm/std.go:139 do not generate ilm policy: exists=true, overwrite=false
2020-05-14T04:16:14.876Z INFO [index-management] idxmgmt/std.go:271 ILM policy successfully loaded.
2020-05-14T04:16:14.876Z INFO [index-management] idxmgmt/std.go:410 Set setup.template.name to '{metricbeat-7.7.0 {now/d}-000001}' as ILM is enabled.
2020-05-14T04:16:14.876Z INFO [index-management] idxmgmt/std.go:415 Set setup.template.pattern to 'metricbeat-7.7.0-*' as ILM is enabled.
2020-05-14T04:16:14.876Z INFO [index-management] idxmgmt/std.go:449 Set settings.index.lifecycle.rollover_alias in template to {metricbeat-7.7.0 {now/d}-000001} as ILM is enabled.
2020-05-14T04:16:14.876Z INFO [index-management] idxmgmt/std.go:453 Set settings.index.lifecycle.name in template to {metricbeat {"policy":{"phases":{"delete":{"actions":{"delete":{}},"min_age":"15d"},"hot":{"actions":{"rollover":{"max_age":"1d","max_size":"10GB"}}}}}}} as ILM is enabled.
2020-05-14T04:16:14.878Z INFO template/load.go:89 Template metricbeat-7.7.0 already exists and will not be overwritten.
2020-05-14T04:16:14.878Z INFO [index-management] idxmgmt/std.go:295 Loaded index template.
2020-05-14T04:16:14.880Z INFO [index-management] idxmgmt/std.go:306 Write alias successfully generated.
2020-05-14T04:16:14.881Z INFO [publisher_pipeline_output] pipeline/output.go:111 Connection to backoff(elasticsearch(https://<id>.europe-west2.gcp.elastic-cloud.com:443)) established
2020-05-14T04:16:41.652Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":50,"time":{"ms":52}},"total":{"ticks":200,"time":{"ms":208},"value":200},"user":{"ticks":150,"time":{"ms":156}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"1d275e53-8c84-445d-93c2-1e9d4172ebfc","uptime":{"ms":30069}},"memstats":{"gc_next":17847088,"memory_alloc":12908096,"memory_total":31243512,"rss":75874304},"runtime":{"goroutines":40}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":11,"batches":1,"total":11},"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0,"published":11,"retry":11,"total":11},"queue":{"acked":11}}},"metricbeat":{"prometheus":{"collector":{"events":11,"success":11}}},"system":{"cpu":{"cores":8},"load":{"1":0.18,"15":0.16,"5":0.15,"norm":{"1":0.0225,"15":0.02,"5":0.0188}}}}}}
2020-05-14T04:17:11.652Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":50,"time":{"ms":5}},"total":{"ticks":210,"time":{"ms":14},"value":210},"user":{"ticks":160,"time":{"ms":9}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"1d275e53-8c84-445d-93c2-1e9d4172ebfc","uptime":{"ms":60069}},"memstats":{"gc_next":17847088,"memory_alloc":13211808,"memory_total":31547224},"runtime":{"goroutines":40}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"system":{"load":{"1":0.11,"15":0.15,"5":0.13,"norm":{"1":0.0138,"15":0.0188,"5":0.0163}}}}}}
Yet none of the kubernetes.* metadata fields make it into the documents in Elasticsearch. Am I configuring something wrong, or am I misunderstanding how this processor is supposed to be used?