Can I get the pod labels in the metricsets?

Hi,

I am interested in using Metricbeat, but I need to filter pods by their labels for some Kibana visualizations.
Is there a way to do that? I would have expected the pod metricset of the kubernetes module to support it, but it doesn't seem to.

I am not able to see the labels in the Metricbeat output.

thanks,

Hi @pastorsx :slight_smile:

Please can you post your current Metricbeat configuration and the version you're using? In 6.4, all of a pod's labels should be attached automatically to the metrics that are sent.
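For reference, a minimal sketch of the kubernetes module configuration that should produce those labels (the kubelet host below is just a placeholder, and the metricset list is trimmed to pod for brevity):

- module: kubernetes
  metricsets:
    - pod
  period: 10s
  # placeholder kubelet read-only endpoint; replace with your node's address
  hosts: ["localhost:10255"]
  # with this enabled, the pod's metadata (including kubernetes.labels.*)
  # should be attached to every event
  add_metadata: true
  in_cluster: true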

Best regards!

Thanks, Mario, for getting back to me. Here is the content of the values.yaml (I am using Helm to deploy it). As you can see, the version is 6.4.2, so it should be supported.


image:
  repository: docker.elastic.co/beats/metricbeat
  tag: 6.4.2
  pullPolicy: IfNotPresent

daemonset:
  podAnnotations:
  config:
    metricbeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    processors:
      - add_cloud_metadata:
      - add_kubernetes_metadata:
          in_cluster: true
    output.file:
      enabled: false
      path: "/home/englab/metricbeat/data"
      filename: metricbeat
      rotate_every_kb: 10000
      number_of_files: 5
    output.elasticsearch:
      hosts: ["http://10.107.174.73:9200"]
  modules:
    system:
      enabled: true
      config:
        - module: system
          period: 10s
          metricsets:
            - cpu
            - load
            - memory
            - network
            - process
            - process_summary
          processes: ['.*']
          process.include_top_n:
            by_cpu: 5      # include top 5 processes by CPU
            by_memory: 5   # include top 5 processes by memory
        - module: system
          period: 1m
          metricsets:
            - filesystem
            - fsstat
          processors:
            - drop_event.when.regexp:
                system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
    kubernetes:
      enabled: true
      config:
        - module: kubernetes
          metricsets:
            - node
            - system
            - pod
            - container
            - volume
          period: 10s
          hosts: ["10.3.16.150:10255"]
          enabled: true
          add_metadata: true
          in_cluster: true

deployment:
  podAnnotations:
  config:
    metricbeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    processors:
      - add_cloud_metadata:
      - add_kubernetes_metadata:
          in_cluster: true
    output.file:
      enabled: false
      path: "/home/englab/metricbeat/data"
      filename: metricbeat
      rotate_every_kb: 10000
      number_of_files: 5
    output.elasticsearch:
      hosts: ["http://10.107.174.73:9200"]
  modules:
    kubernetes:
      enabled: true
      config:
        - module: kubernetes
          metricsets:
            - state_node
            - state_deployment
            - state_replicaset
            - state_pod
            - state_container
          period: 10s
          hosts: ["kube-state-metrics:8080"]
          add_metadata: true
          in_cluster: true

plugins:

extraVolumes:
extraVolumeMounts:

resources: {}

I actually realized something today: the labels do show up, but not everywhere.

The kubernetes module using the state_* metricsets works fine and the labels are showing up:
kubernetes:
  enabled: true
  config:
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
      period: 10s
      hosts: ["kube-state-metrics:8080"]
      add_metadata: true
      in_cluster: true
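For example, a state_pod document in my index has fields roughly like these (pod name and label values are made up here, just to show the shape):

kubernetes.pod.name: some-pod-6d4b75cb6d-abcde   # example value
kubernetes.namespace: default                    # example value
kubernetes.labels.app: my-app                    # <- the label I can filter on in Kibana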

But the other kubernetes metricsets don't show any labels. I tried a different syntax, as you can see below, but I just cannot get the labels in there. Any idea whether this is broken in 6.4.2? This is what I currently have:

kubernetes:
  enabled: true
  config:
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      hosts: ["localhost:10255"]
      enabled: true
      add_metadata: true
      in_cluster: true
      processors:
      - add_cloud_metadata:
      - add_kubernetes_metadata:
          in_cluster: true
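One way to check whether any labels are being produced at all would be to temporarily dump the events locally and grep for kubernetes.labels. A rough sketch (output.console is a standard Beats output, but only one output can be enabled at a time, so output.elasticsearch would have to be disabled during the test):

# debug sketch: print events to stdout so kubernetes.labels.* can be inspected
# (disable output.elasticsearch first; Beats allow only one active output)
output.console:
  pretty: true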

thanks,

I am having the same issue. Were you able to resolve it?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.