Metricbeat on kubernetes - Desired pods and Available pods are wrong


(Abdulkadir Dalga) #1

Hi. I have deployed the Metricbeat DaemonSet on my Kubernetes cluster with the predefined Kibana dashboards. I also deployed kube-state-metrics and configured Metricbeat accordingly. I get metrics and they look good. However, although I have 72 pods according to kubectl, Metricbeat shows both desired and available pods as 205.

I've used https://raw.githubusercontent.com/elastic/beats/6.3/deploy/kubernetes/metricbeat-kubernetes.yaml for the DaemonSet configuration. I only added the following module to the ConfigMap.

    - module: kubernetes
      enabled: true
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_statefulset
        - state_pod
        - state_container
      period: 10s
      hosts: ["kube-state-metrics:8080"]

PS: I used 6.3.0 images for Elasticsearch, Kibana and Metricbeat.


(Carlos Pérez Aradros) #2

Hi @akd,

To be sure, can you double check both numbers manually?

In kubernetes:

kubectl get pods --all-namespaces --no-headers | wc -l

(--no-headers keeps the header row out of the count.)

In Kibana:

Go to Discover using the metricbeat-* index pattern, filter for metricset.name: state_pod, and check whether anything looks wrong.

Best regards


(Abdulkadir Dalga) #3

I looked into the issue and realized that I have 41 pods from Deployments and 30 from DaemonSets. Metricbeat does not support DaemonSet stats yet, so the desired and available pod stats should show 41. However, it shows 205, and I have 5 nodes in my Kubernetes cluster. This means it multiplies the pod count by the number of nodes. Is there any configuration for this situation, or is this a bug?


(Carlos Pérez Aradros) #4

Uhm, that should not happen, although 41 x 5 == 205 is really suspicious. Did you modify the default period by any chance? The visualizations assume a 10s period.


(Abdulkadir Dalga) #5

I checked the periods. They are 10 seconds. When I shut one node down it shows 164 (41 x 4). It definitely multiplies by the number of nodes. Interesting issue.
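One way to see this duplication directly in Elasticsearch (a sketch, assuming the default metricbeat-* indices; kubernetes.pod.name and beat.hostname are the field names Metricbeat 6.x uses for the pod and the reporting Beat instance): compare the raw state_pod document count against the number of distinct pods and distinct reporting Beats over a short time window.

    GET metricbeat-*/_search
    {
      "size": 0,
      "query": {
        "term": { "metricset.name": "state_pod" }
      },
      "aggs": {
        "distinct_pods":    { "cardinality": { "field": "kubernetes.pod.name" } },
        "reporting_beats":  { "cardinality": { "field": "beat.hostname" } }
      }
    }

If reporting_beats equals the node count while distinct_pods is the expected 41, each pod's state is being reported once per node.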


(Carlos Pérez Aradros) #6

Argh, sorry, I finally understood what's going on. I didn't read your first post correctly (sorry again). The state_* metricsets are cluster-wide metrics, so by deploying them in the DaemonSet you are multiplying the output by the number of nodes.

This is why our reference manifests use an extra Deployment (apart from the Daemonset) for these: https://github.com/elastic/beats/blob/6.3/deploy/kubernetes/metricbeat-kubernetes.yaml#L164-L250
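A minimal sketch of that split (abbreviated and hypothetical; the linked manifest is the authoritative version): the kubernetes module with the state_* metricsets lives only in the ConfigMap of a single-replica Deployment, while the DaemonSet keeps the per-node modules, so the cluster-wide metrics are reported exactly once.

    # ConfigMap used only by the singleton Metricbeat Deployment --
    # the DaemonSet's ConfigMap omits this module entirely.
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_statefulset
        - state_pod
        - state_container
      period: 10s
      hosts: ["kube-state-metrics:8080"]

    # Deployment skeleton (container spec omitted; apiVersion depends on
    # your cluster version -- see the linked reference manifest)
    kind: Deployment
    metadata:
      name: metricbeat
      namespace: kube-system
    spec:
      replicas: 1   # exactly one instance collects cluster-wide state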

As for the DaemonSet metrics, we are working on them :slight_smile: Have a look at: https://github.com/elastic/beats/issues/7058

Best regards


(Abdulkadir Dalga) #7

Thanks. I didn't see this comment: # Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics. My bad.


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.