How to get kubernetes node labels with metricbeat?

Hi there,

I'm trying to enrich the node and state_node metrics of the kubernetes module with k8s node labels, but without any luck.
I'm using metricbeat 7.6.0.

My configuration is the following for the daemonset

    - module: kubernetes
      add_metadata: true
      labels.dedot: true
      annotations.dedot: true
      metricsets:
        - container
        - node
        - pod
        - system
        - volume
      host: "${HOSTNAME}"
      hosts: ["https://${HOSTNAME}:10250"]
      bearer_token_file: /var/run/secrets/
      ssl.certificate_authorities:
        - /etc/kubelet/ssl/kubelet.crt

and the following for the single-pod deployment

    # State metrics from kube-state-metrics service:
    - module: kubernetes
      enabled: true
      labels.dedot: true
      annotations.dedot: true
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_statefulset
        - state_pod
        - state_container
        - state_cronjob
        - state_service
        - state_persistentvolume
        - state_persistentvolumeclaim
      period: 10s
      hosts: ["kube-state-metrics:8080"]
      add_metadata: true

I've read this answer, where it is said that "In 6.4 all the kubernetes metricsets will also collect labels when possible out of the box", so I was expecting to find them in the node or state_node metrics.
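With dedot enabled, I would expect the labels to show up on each event roughly like this (a sketch with placeholder values, not an actual event from my cluster):

    kubernetes:
      node:
        name: my-node-1
      labels:
        kubernetes_io/hostname: my-node-1
        kubernetes_io/os: linux

But there is no kubernetes.labels object at all on the node or state_node documents.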

Am I missing something?



Hmm, could you share any logs of Metricbeat so as to check if add_kubernetes_metadata processor is being enabled without a problem?


I'm having the same issue. I have the same config as Federico, and I get no kubernetes.labels.* fields on the events Metricbeat sends.

Here's the output from metricbeat when started. I see no reference to add_kubernetes_metadata:

2020-03-02T10:40:12.993Z INFO instance/beat.go:622 Home path: [/usr/share/metricbeat] Config path: [/usr/share/metricbeat] Data path: [/usr/share/metricbeat/data] Logs path: [/usr/share/met
2020-03-02T10:40:13.016Z INFO instance/beat.go:630 Beat ID: a63ee65a-52c2-447a-9315-ac8d5ba275d4
2020-03-02T10:40:13.017Z INFO [api] api/server.go:62 Starting stats endpoint
2020-03-02T10:40:13.018Z INFO [api] api/server.go:64 Metrics endpoint listening on: (configured: localhost)
2020-03-02T10:40:13.018Z INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
2020-03-02T10:40:13.018Z INFO [beat] instance/beat.go:958 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/metricbeat", "data": "/usr/share/metricbeat/data", "home"
: "/usr/share/metricbeat", "logs": "/usr/share/metricbeat/logs"}, "type": "metricbeat", "uuid": "a63ee65a-52c2-447a-9315-ac8d5ba275d4"}}}
2020-03-02T10:40:13.018Z INFO [beat] instance/beat.go:967 Build info {"system_info": {"build": {"commit": "6a23e8f8f30f5001ba344e4e54d8d9cb82cb107c", "libbeat": "7.6.0", "time": "2020
-02-05T23:10:10.000Z", "version": "7.6.0"}}}
2020-03-02T10:40:13.018Z INFO [beat] instance/beat.go:970 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.13.7"}}}
2020-03-02T10:40:13.019Z INFO [beat] instance/beat.go:974 Host info
2020-03-02T10:40:13.019Z INFO [beat] instance/beat.go:1003 Process info {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setg
"cwd": "/usr/share/metricbeat", "exe": "/usr/share/metricbeat/metricbeat", "name": "metricbeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2020-03-02T10:40:12
2020-03-02T10:40:13.020Z INFO instance/beat.go:298 Setup Beat: metricbeat; Version: 7.6.0
2020-03-02T10:40:13.020Z INFO [index-management] idxmgmt/std.go:182 Set output.elasticsearch.index to 'metricbeat-7.6.0' as ILM is enabled.
2020-03-02T10:40:13.020Z INFO elasticsearch/client.go:174 Elasticsearch url:
2020-03-02T10:40:13.021Z INFO [publisher] pipeline/module.go:110 Beat name:
2020-03-02T10:40:13.043Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-03-02T10:40:13.043Z INFO instance/beat.go:439 metricbeat start running.
2020-03-02T10:40:14.587Z INFO pipeline/output.go:95 Connecting to backoff(elasticsearch())
2020-03-02T10:40:14.654Z INFO elasticsearch/client.go:757 Attempting to connect to Elasticsearch version 7.6.0
2020-03-02T10:40:14.663Z INFO [license] licenser/es_callback.go:50 Elasticsearch license: Platinum
2020-03-02T10:40:14.670Z INFO [index-management] idxmgmt/std.go:258 Auto ILM enable success.
2020-03-02T10:40:14.682Z INFO [index-management.ilm] ilm/std.go:139 do not generate ilm policy: exists=true, overwrite=false
2020-03-02T10:40:14.682Z INFO [index-management] idxmgmt/std.go:271 ILM policy successfully loaded.
2020-03-02T10:40:14.682Z INFO [index-management] idxmgmt/std.go:410 Set to '{metricbeat-7.6.0 {now/d}-000001}' as ILM is enabled.
2020-03-02T10:40:14.682Z INFO [index-management] idxmgmt/std.go:415 Set setup.template.pattern to 'metricbeat-7.6.0-*' as ILM is enabled.
2020-03-02T10:40:14.682Z INFO [index-management] idxmgmt/std.go:449 Set settings.index.lifecycle.rollover_alias in template to {metricbeat-7.6.0 {now/d}-000001} as ILM is enabled.
2020-03-02T10:40:14.682Z INFO [index-management] idxmgmt/std.go:453 Set in template to {metricbeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"m
ax_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2020-03-02T10:40:14.725Z INFO template/load.go:89 Template metricbeat-7.6.0 already exists and will not be overwritten.
2020-03-02T10:40:14.725Z INFO [index-management] idxmgmt/std.go:295 Loaded index template.
2020-03-02T10:40:14.733Z INFO [index-management] idxmgmt/std.go:306 Write alias successfully generated.
2020-03-02T10:40:14.739Z INFO pipeline/output.go:105 Connection to backoff(elasticsearch()) established
2020-03-02T10:40:43.044Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":240,"time":{"ms":249}},"total

Hey I would suggest using the processor standalone in metricbeat.yml configuration:
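For reference, the standalone processor would look something like this at the top level of metricbeat.yml (a minimal sketch relying on the processor's in-cluster defaults; adjust to your setup):

    processors:
      - add_kubernetes_metadata: ~

With no options set, the processor detects the in-cluster Kubernetes configuration from the service account and uses its default indexers and matchers.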

Also there was an issue with kubernetes.labels.* in 7.6 which should be fixed in 7.6.1:

Any update on this?
I'm having the same issue at the moment.

Thanks for your reply.

To which metricset should it be added? node, scraped by the daemonset, or state_node, scraped by the single-replica deployment? I tried both but I'm still not able to see anything.

Is adding the processor without any options enough?

Furthermore, the documentation says: "The add_kubernetes_metadata processor annotates each event with relevant metadata based on which Kubernetes pod the event originated from." What does that mean?

In my opinion the documentation about this processor is superficial and confusing...

Looking at the files modified in the pull request that you linked, I suppose the problem you are talking about is related only to the pod labels, isn't it?

Hey @Federico_Bevione!

All kubernetes metricsets will add metadata by default, so you don't need the add_metadata: true option in your configs in order to have this feature enabled. There is no need to try the add_kubernetes_metadata processor, since we have spotted the issue.

Speaking of this, I was able to reproduce it and there is indeed a bug. Thank you for reporting it! We recently added support for multiple resources in autodiscover, and there were some issues like the one I mentioned in the previous post. The one we have here is another one related to the same changes.

There is a PR that aims to fix this:


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.