Metricbeat missing kube-state-metrics

I have an ELK stack with Filebeat and Metricbeat reporting to Logstash. Filebeat works fine.

I can see kubernetes.* and system.* metrics in the Metricbeat data, but I do not see kube-state-metrics data and cannot figure out what I am doing wrong.

The entire stack is running in Kubernetes with the following versions:
Elastic - marketplace.gcr.io/google/elasticsearch/ubuntu16_04:6.3
Logstash - docker.elastic.co/logstash/logstash:6.3.2
Kibana - marketplace.gcr.io/google/elastic-gke-logging/kibana:6.3

metricbeat - docker.elastic.co/beats/metricbeat:6.4.0
kube-state-metrics - quay.io/coreos/kube-state-metrics:v1.2.0

kube-state-metrics is running in the kube-system namespace (as is Metricbeat).

I don't see any errors in the logs and I am at a loss.

Any ideas or troubleshooting tips?

Hi @violetaria,

Could you share the configuration you are using to deploy metricbeat?

Yes, here are the YAMLs I am using. I've XXXX'd out certain parts and have not included the ClusterRoleBinding, ClusterRole, and ServiceAccount pieces, but hopefully this is enough info.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    # To enable hints based autodiscover uncomment this:
    #metricbeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      host: ${NODE_NAME}
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.logstash:
      hosts: ${LOGSTASH_HOST:logstash}:${LOGSTASH_PORT:5044}
    # output.elasticsearch:
    #   hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    #   username: ${ELASTICSEARCH_USERNAME}
    #   password: ${ELASTICSEARCH_PASSWORD}

    fields: {environment: "${ENVIRONMENT}", cluster_name: "${CLUSTER_NAME}"}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      enabled: true
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10255"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.logstash:
      hosts: ${LOGSTASH_HOST:logstash}:${LOGSTASH_PORT:5044}
    # output.elasticsearch:
    #   hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    #   username: ${ELASTICSEARCH_USERNAME}
    #   password: ${ELASTICSEARCH_PASSWORD}

    fields: {environment: "${ENVIRONMENT}", cluster_name: "${CLUSTER_NAME}"}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  # This module requires `kube-state-metrics` up and running under `kube-system` namespace
  kube-state-metrics.yml: |-
    - module: kubernetes
      enabled: true
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
        # Uncomment this to get k8s events:
        # - event
      period: 10s
      host: ${NODE_NAME}
      hosts: ["kube-state-metrics:8080"]
  # kubernetes-api-server.yml: |-
  #   - module: kubernetes
  #     enabled: true
  #     metricsets:
  #       - apiserver
  #     hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
---

Above are the ConfigMaps; below are the DaemonSet and Deployment.

---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:6.4.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: LOGSTASH_HOST
          value: XX.XX.XX.XX
        - name: LOGSTASH_PORT
          value: "XXXX"
        - name: ENVIRONMENT
          value: XXXX
        - name: CLUSTER_NAME
          value: XXXX
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-modules
      - name: data
        hostPath:
          path: /var/lib/metricbeat-data
          type: DirectoryOrCreate
---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:6.4.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
        ]
        env:
        - name: LOGSTASH_HOST
          value: XX.XX.XX.XX
        - name: LOGSTASH_PORT
          value: "XXXX"
        - name: ENVIRONMENT
          value: XXXX
        - name: CLUSTER_NAME
          value: XXXX
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-modules
---
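
For reference, the hosts: ["kube-state-metrics:8080"] entry in the deployment modules assumes a Service with that exact name resolvable from kube-system. Ours comes from the standard kube-state-metrics manifests; a minimal sketch of what it looks like (the selector here is illustrative and must match your kube-state-metrics pods):

apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    k8s-app: kube-state-metrics
spec:
  ports:
  - name: http-metrics
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    k8s-app: kube-state-metrics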

@violetaria your configuration looks fine. Could you please double-check the logs of the instance deployed with the Deployment for any errors? If you still don't see anything, you can try to enable debug logging for kubernetes with -d kubernetes; for that, the args should be modified to look like this:

        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-d", "kubernetes",
        ]

You can also try to confirm that kube-state-metrics is healthy and reachable at kube-state-metrics:8080 from a pod in the kube-system namespace.
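
For example, with a throwaway pod (a quick sketch; the pod name is arbitrary and the image is only an illustration, any image that ships curl will do):

apiVersion: v1
kind: Pod
metadata:
  name: ksm-check
  namespace: kube-system
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl
    # This image's entrypoint is curl itself, so args are just curl flags
    args: ["-sS", "http://kube-state-metrics:8080/metrics"]

If kubectl logs ksm-check -n kube-system then shows the Prometheus-format metrics, DNS resolution and connectivity to kube-state-metrics are fine.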

I did not see any errors on startup, but I do see some errors at runtime:

2018-09-18T14:13:32.833Z	ERROR	kubernetes/watcher.go:254	kubernetes: Watching API error EOF
2018-09-18T14:13:32.833Z	INFO	kubernetes/watcher.go:238	kubernetes: Watching API for resource events
2018-09-18T14:13:32.840Z	ERROR	kubernetes/watcher.go:254	kubernetes: Watching API error proto: illegal wireType 6
2018-09-18T14:13:32.840Z	INFO	kubernetes/watcher.go:258	kubernetes: Ignoring event, moving to most recent resource version
2018-09-18T14:13:32.840Z	INFO	kubernetes/watcher.go:238	kubernetes: Watching API for resource events
2018-09-18T14:13:36.800Z	INFO	[monitoring]	log/log.go:141	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"tic

I've also confirmed kube-state-metrics is reporting OK (we have had this data coming into Prometheus but are trying to move to the ELK stack). I see data like this from kube-state-metrics:

kube_deployment_status_replicas_unavailable{deployment="kube-dns-autoscaler",namespace="kube-system"} 0
kube_deployment_status_replicas_unavailable{deployment="kube-state-metrics",namespace="kube-system"} 0
kube_deployment_status_replicas_unavailable{deployment="kubernetes-dashboard",namespace="kube-system"} 0

Question: will the kube-state-metrics metrics come over with those specific names, e.g. kube_deployment_status_replicas_unavailable?
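
(My guess, from the exported-fields docs, is that Metricbeat maps these into its own field schema rather than keeping the raw Prometheus names, i.e. something like:

kube_deployment_status_replicas_unavailable  ->  kubernetes.deployment.replicas.unavailable

but I'd like to confirm that.)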

thanks

It seems like the version we are using (6.4.0) has some bugs, so I am going to try downgrading everything to 6.3.2 and see if that helps.

Hi, I would like to know more about this. I have the exact same issue; the only difference is that it was working until yesterday and now it doesn't, with no change in the configuration. More details are already provided by the person above, and I have checked almost all of the same things they checked.

We decided not to use the ELK stack; Metricbeat never fully delivered all of the Kubernetes + kube-state-metrics data. I tried sending the data through Logstash as well as directly to Elasticsearch, and it was always missing data.

Sorry I don't have more info for you 🙁

Well, the same thing is happening to me; I cannot see CPU or memory metrics for Kubernetes.

I can see data for nodes, deployments, and pods, but it is still unstable: if I uncordon one worker node, or if I reboot a worker node, instead of showing one node less it shows zero, and everything goes blank. I am not sure whether it's because of the DaemonSet, as I am running Metricbeat both as a DaemonSet and as a Deployment.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.