Metricbeat is deployed as a DaemonSet on all nodes of your Kubernetes cluster, so you collect metrics from all nodes by default.
What you want can be achieved with templates in your manifest. You can define a condition based on, for example, labels that your 8 pods have in common, and collect metrics only for them.
See the reference under "Metricbeat supports templates for modules":
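A minimal sketch of such a template, assuming the pods you care about share an example label app: my-app (the label key/value and the module settings below are placeholders, not from this thread):

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        # Assumed label for illustration: only pods labeled
        # app: my-app receive this module configuration.
        - condition:
            equals:
              kubernetes.labels.app: "my-app"
          config:
            - module: kubernetes
              metricsets: ["pod"]
              hosts: ["https://${NODE_NAME}:10250"]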
Thanks for your quick answer. Out of 16 Pods I need only 8. Is that possible?
Unfortunately I have an error.
Kubernetes is showing this:
Exiting: error loading config file: yaml: line 21: did not find expected key
My .yaml file:
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # In large Kubernetes clusters consider setting unique to false
      # to avoid using the leader election strategy and
      # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
      unique: true
      templates:
        - condition.and:
            - equals:
                kubernetes.deployment.name: "ils"
            - equals:
                kubernetes.deployment.name: "ilp"
        - config:
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # In large Kubernetes clusters consider setting unique to false
      # to avoid using the leader election strategy and
      # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
      templates:
        - condition.or:
            - equals:
                kubernetes.namespace: "kube-system"
            - equals:
                kubernetes.namespace: "nginx"
          config:
I don't really understand why to use "kube-system" and "nginx". I need to filter the Pods (out of 16 Pods I need 8), and I chose the field kubernetes.deployment.name and NOT kubernetes.namespace.
Getting this error:
Exiting: error loading config file: yaml: line 21: did not find expected key
YAML:
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # In large Kubernetes clusters consider setting unique to false
      # to avoid using the leader election strategy and
      # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
      #unique: true
      templates:
        - condition.or:
            - equals:
                kubernetes.deployment.name: "ils"
            - equals:
                kubernetes.deployment.name: "ilp"
That was an example; you can use whatever conditions match your scenario.
kubernetes.deployment.name is not available in the kubernetes provider to be used in your conditions. Please have a look here for the available fields:
And here you can find more examples for your conditions:
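For instance, since pods created by a Deployment inherit its name as a prefix of the pod name, a contains condition on kubernetes.pod.name (a field the provider does expose) could approximate filtering by Deployment. Using contains this way is my own suggestion, not something confirmed in this thread, and the module settings under config are placeholders:

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        # Matching on the pod name prefix approximates filtering
        # by the Deployment that created the pod.
        - condition.or:
            - contains:
                kubernetes.pod.name: "ils"
            - contains:
                kubernetes.pod.name: "ilp"
          config:
            - module: kubernetes
              metricsets: ["pod"]
              hosts: ["https://${NODE_NAME}:10250"]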
I just tried your example and it is again giving this error. Any idea what is wrong in the .yaml file?
Exiting: error loading config file: yaml: line 21: did not find expected key
YAML file, starting from line 17:
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # In large Kubernetes clusters consider setting unique to false
      # to avoid using the leader election strategy and
      # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
      #unique: true
      templates:
        - condition.and:
            - equals:
                kubernetes.namespace: "xxxxx"
            - equals:
                kubernetes.namespace: "xxxxx"
I suspect an indentation error. In the config you attached, can you please move the condition to the left and change and to or? (It is a logical OR, isn't it? You need either one namespace or the other.)
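As a sketch, the corrected block could be indented like this (the xxxxx namespaces are kept from your post; the config section is an assumed placeholder, so put your own module settings there):

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        # condition.or sits at the template list item level,
        # with each equals clause as a list entry beneath it.
        - condition.or:
            - equals:
                kubernetes.namespace: "xxxxx"
            - equals:
                kubernetes.namespace: "xxxxx"
          config:
            - module: kubernetes
              metricsets: ["pod"]
              hosts: ["https://${NODE_NAME}:10250"]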
I have been testing in my local cluster as well; the system module is enabled by default (although you don't specify it in the manifest).
So can you check which metricset the "extra" namespace comes from?
See this example from my local tests, with a filter applied and metricset.name visible.