@stephenb
Well, I spent the whole day today trying to make it work, but I couldn't.
I set up Elasticsearch, Kibana, and Metricbeat v8.2.2 on Docker Desktop, just as you did above.
- I couldn't collect metrics in the first place. I downloaded the Metricbeat manifest using
curl -L -O https://raw.githubusercontent.com/elastic/beats/8.2/deploy/kubernetes/metricbeat-kubernetes.yaml
and only updated the namespace. I deployed kube-state-metrics as well, but I'm getting the following error:
{"log.level":"error","@timestamp":"2022-06-14T15:09:12.461Z","log.origin":{"file.name":"module/wrapper.go","file.line":254},"message":"Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: error making http request: Get \"https://docker-desktop:10250/stats/summary\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)","service.name":"metricbeat","ecs.version":"1.6.0"}
kube-state-metrics itself is working fine; here are its logs:
I0614 14:37:35.500515 1 server.go:93] Using default resources
I0614 14:37:35.500650 1 types.go:136] Using all namespace
I0614 14:37:35.500681 1 server.go:122] Metric allow-denylisting: Excluding the following lists that were on denylist:
W0614 14:37:35.500704 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0614 14:37:35.571949 1 server.go:250] Testing communication with server
I0614 14:37:35.754692 1 server.go:255] Running with Kubernetes cluster version: v1.22. git version: v1.22.5. git tree state: clean. commit: 5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e. platform: linux/amd64
I0614 14:37:35.754761 1 server.go:257] Communication with server successful
I0614 14:37:35.755702 1 server.go:202] Starting metrics server: [::]:8080
I0614 14:37:35.755986 1 metrics_handler.go:96] Autosharding disabled
I0614 14:37:35.757064 1 builder.go:232] Active resources: certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
I0614 14:37:35.801428 1 server.go:191] Starting kube-state-metrics self metrics server: [::]:8081
I0614 14:37:35.834605 1 server.go:66] levelinfomsgTLS is disabled.http2false
I0614 14:37:35.834680 1 server.go:66] levelinfomsgTLS is disabled.http2false
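To double-check that it is really serving metrics, I can port-forward to it and curl the endpoint (assuming the default service name kube-state-metrics in the kube-system namespace):

kubectl port-forward -n kube-system svc/kube-state-metrics 8080:8080
curl -s http://localhost:8080/metrics | head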
- Even setting the previous error (1) aside, I did the same as you to get an index per namespace. This is the configuration I used for output.elasticsearch:
setup.template.enabled: false
setup.ilm.enabled: true
setup.ilm.policy_name: "metricbeat"
setup.ilm.rollover_alias: "metricbeat-%{[agent.version]}-%{[kubernetes.namespace]}"
setup.ilm.pattern: "{now/d}-000001"
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  index: "metricbeat-%{[agent.version]}-%{[kubernetes.namespace]}"
but I was getting the following errors:
{"log.level":"error","@timestamp":"2022-06-14T15:22:14.788Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":176},"message":"failed to publish events: temporary bulk send failure","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-06-14T15:22:16.053Z","log.origin":{"file.name":"module/wrapper.go","file.line":254},"message":"Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: error making http request: Get \"https://docker-desktop:10250/stats/summary\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-06-14T15:22:16.631Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":176},"message":"failed to publish events: temporary bulk send failure","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-06-14T15:22:18.070Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":176},"message":"failed to publish events: temporary bulk send failure","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-06-14T15:22:20.050Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":176},"message":"failed to publish events: temporary bulk send failure","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-06-14T15:22:21.690Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":176},"message":"failed to publish events: temporary bulk send failure","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-06-14T15:22:23.112Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":176},"message":"failed to publish events: temporary bulk send failure","service.name":"metricbeat","ecs.version":"1.6.0"}
I tested this on local Docker v20.10.14 and Windows [Version 10.0.19042.1706].