Hi team,
I'm successfully getting metrics from my multiple clusters into Kibana.
But now I realize I don't want ALL pod metrics from a cluster.
Example: one cluster has 15 pods, but I only need 8 of them.
Is it possible to hide/enable specific pods in Metricbeat? How do I configure that? Any ideas?
Thank you.
Can someone please help me?
Hello @Swathi12,
Metricbeat is deployed as a DaemonSet on all nodes of your Kubernetes cluster, so by default you collect metrics from all nodes.
What you want can be achieved with templates in your manifest. You can define a condition based on, e.g., labels that your 8 pods have in common, and collect metrics only for them.
Reference under "Metricbeat supports templates for modules":
Another relevant example:
templates:
  - condition.and:
      - equals:
          kubernetes.namespace: "namespace1"
      - equals:
          kubernetes.namespace: "namespace2"
Let me know if that helps
Hi @Andreas_Gkizas
thanks for your quick answer. I need from 16 Pods only 8 Pods.. is that possible ?
Unfortunately i have an error..
Kubernetes is showing this:
Exiting: error loading config file: yaml: line 21: did not find expected key
My .yaml file
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # In large Kubernetes clusters consider setting unique to false
      # to avoid using the leader election strategy and
      # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
      unique: true
      templates:
        - condition.and:
            - equals:
                kubernetes.deployment.name: "ils"
            - equals:
                kubernetes.deployment.name: "ilp"
        - config:
Try this one:
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # In large Kubernetes clusters consider setting unique to false
      # to avoid using the leader election strategy and
      # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
      templates:
        - condition.or:
            - equals:
                kubernetes.namespace: "kube-system"
            - equals:
                kubernetes.namespace: "nginx"
          config:
@Andreas_Gkizas
I don't really understand why to use "kube-system" and "nginx". I need to filter the pods (from 16 pods I need 8),
and I chose the field kubernetes.deployment.name, NOT kubernetes.namespace.
I'm getting this error:
Exiting: error loading config file: yaml: line 21: did not find expected key
YAML:
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # In large Kubernetes clusters consider setting unique to false
      # to avoid using the leader election strategy and
      # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
      #unique: true
      templates:
        - condition.or:
            - equals:
                kubernetes.deployment.name: "ils"
            - equals:
                kubernetes.deployment.name: "ilp"
That was just an example; you can use whatever conditions match your scenario.
kubernetes.deployment.name is not available in the kubernetes provider for use in your conditions. Please have a look here for the available fields:
And here you can find more examples for your conditions:
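For instance, a minimal sketch of filtering by pod labels instead (the label key and value below are hypothetical; substitute a label that only your 8 target pods actually carry):

```yaml
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        # Hypothetical label: pick one shared only by the 8 pods you want
        - condition.equals:
            kubernetes.labels.app: "my-app"
          config:
            - module: kubernetes
              metricsets: ["state_pod"]
              hosts: ["kube-state-metrics:8080"]
              period: 10s
```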
@Andreas_Gkizas
I just tried with your given example and it's showing this error again. Any idea what is wrong in the .yaml file?
Exiting: error loading config file: yaml: line 21: did not find expected key
YAML file, starting from line 17:
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # In large Kubernetes clusters consider setting unique to false
      # to avoid using the leader election strategy and
      # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
      #unique: true
      templates:
        - condition.and:
            - equals:
                kubernetes.namespace: "xxxxx"
            - equals:
                kubernetes.namespace: "xxxxx"
I suspect an indentation error. From the config you attached, can you please move condition to the left and change and to or? (It is a logical OR, isn't it? You need either one namespace or the other.)
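As a side note, one quick way to pin down this kind of "did not find expected key" error locally is to run the file through a YAML parser before deploying. A generic sketch (not specific to Metricbeat; assumes the third-party PyYAML package is installed via pip install pyyaml):

```python
# Quick local check for YAML syntax errors before deploying a manifest.
import sys
import yaml


def validate(path):
    """Return True if the file parses as YAML; otherwise print the parser error."""
    try:
        with open(path) as f:
            yaml.safe_load(f)
        print(f"{path}: OK")
        return True
    except yaml.YAMLError as err:
        # PyYAML points at the offending line, e.g. "did not find expected key"
        print(f"{path}: {err}")
        return False


if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(0 if validate(sys.argv[1]) else 1)
```

Running it against the manifest prints the exact line the parser chokes on, which usually makes the mis-indented key obvious.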
My example:
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      unique: true
      templates:
        - condition.or:
            - equals:
                kubernetes.namespace: "kube-system"
            - equals:
                kubernetes.namespace: "nginx"
          config:
            - module: kubernetes
              hosts: ["kube-state-metrics:8080"]
              period: 10s
              add_metadata: true
              metricsets:
                - state_node
                - state_deployment
                - state_daemonset
                - state_replicaset
                - state_pod
                - state_container
                - state_job
                - state_cronjob
                - state_resourcequota
                - state_statefulset
                - state_service
                - state_persistentvolume
                - state_persistentvolumeclaim
                - state_storageclass
            - module: kubernetes
              metricsets:
                - apiserver
              hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
              bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
              ssl.certificate_authorities:
                - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
              period: 30s
@Andreas_Gkizas thank you again.
I just set the conditions, but why is it still showing all pod names? Somehow the filtering is not working correctly.
I have 3 namespaces in total, but instead of 2 it's showing 3.
I have been testing in my local cluster as well; the system module is enabled by default (even though you don't specify it in the manifest).
So can you check which metricset the "extra" namespace comes from?
See an example from my local tests with a filter applied and also with metricset.name visible.
If you want to disable the system module, just comment out:
# metricbeat.config.modules:
#   # Mounted `metricbeat-daemonset-modules` configmap:
#   path: ${path.config}/modules.d/*.yml
#   # Reload module configs as they change:
#   reload.enabled: false
Also, are your namespaces ils/*?
You can change your filter from equals to contains and be more specific,
e.g.:
- contains:
    kubernetes.namespace: "xxxxx"
@Andreas_Gkizas
I checked with metricset.name and I'm getting this:

Is the .yml file okay?
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    #metricbeat.config.modules:
    #  # Mounted `metricbeat-daemonset-modules` configmap:
    #  path: ${path.config}/modules.d/*.yml
    #  # Reload module configs as they change:
    #  reload.enabled: false
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          scope: cluster
          node: ${NODE_NAME}
          unique: true
          templates:
            - contains:
                kubernetes.namespace: "xxxxxxx"
              config:
                - module: kubernetes
                  hosts: ["kube-state-metrics:8080"]
                  period: 10s
                  add_metadata: true
                  metricsets:
                    - state_node
                    - state_deployment
                    - state_daemonset
                    - state_replicaset
                    - state_pod
                    - state_container
                    - state_job
                    - state_cronjob
                    - state_resourcequota
                    - state_statefulset
                    - state_service
                    - state_persistentvolume
                    - state_persistentvolumeclaim
                    - state_storageclass
                - module: kubernetes
                  metricsets:
                  #- apiserver
                  #  hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
                  ssl.certificate_authorities:
                    - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                  period: 30s