Default Dashboard "[Metricbeat Kubernetes] Overview ECS" doesn't work

Hello!
I am trying to configure monitoring for my Kubernetes cluster. To do this, I:

  1. Set up an Elasticsearch server and Kibana, both version 7.1.1.
  2. Installed Filebeat and Metricbeat 7.1.1 on my Kubernetes cluster via YAML manifests.
  3. Loaded the standard Metricbeat Kubernetes dashboards.

I ran into some problems:

  1. Some visualizations on the "[Metricbeat Kubernetes] Overview ECS" dashboard work, but others show no data (see the "metricbeat" screenshot), even though there should be enough data.
  2. The visualizations on the "[Metricbeat Kubernetes] API server ECS" dashboard are completely empty.

How can I configure Metricbeat so that the data is shipped and displayed correctly in Kibana?

Here is the YAML configuration I used for shipping metrics:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    # To enable hints based autodiscover uncomment this:
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
    setup.dashboards.enabled: true
    setup.kibana.host: "172.16.0.184:5601"
    setup.kibana.protocol: "http"
    setup.kibana.username: "user"
    setup.kibana.password: "password"
    setup.template.settings:
        index.number_of_shards: 5
        index.number_of_replicas: 1
        index.number_of_routing_shards: 30
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["${HOST_IP}:10255"]

[screenshot: photo_2019-10-10_18-47-17]

xwiz,

Are there any errors in your Metricbeat logs? If so, can you paste a snippet?

Another thing: It looks like the dashboards you're trying to use require both the state_deployment and state_node metricsets, which I don't see enabled.
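
For reference, enabling them would look roughly like this (a minimal sketch; it assumes kube-state-metrics is exposed as a service named kube-state-metrics in the kube-system namespace on port 8080):

- module: kubernetes
  metricsets:
    - state_node
    - state_deployment
  period: 10s
  # Assumed service name, namespace and port; adjust to your kube-state-metrics deployment
  hosts: ["kube-state-metrics.kube-system:8080"]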

I get only one type of error. Here is an example from the screenshot:
[screenshot]

This log came from Filebeat, and the error appears only for Metricbeat's containers. It's strange, because I use OSS Filebeat 7.1.1, OSS Metricbeat 7.1.1 and OSS Elasticsearch 7.1.1.

Maybe I can provide the output of some command to make it clearer?

But I have these lines in my YAML:

data:
  # This module requires `kube-state-metrics` up and running under `kube-system` namespace
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container

Is this not enough?

I tried to find all the metricsets in my Elasticsearch:

GET metricbeat-*/_search
{
  "size": 0, 
  "query": {
      "exists": {
          "field": "metricset.name"
      }
  },
  "aggs" : {
      "products" : {
          "terms" : {
              "field" : "metricset.name",
              "size" : 500
          }
      }
  }
}

I get the result:

{
  "took" : 33,
  "timed_out" : false,
  "_shards" : {
    "total" : 7,
    "successful" : 7,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10000,
      "relation" : "gte"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "products" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "volume",
          "doc_count" : 1305543
        },
        {
          "key" : "network",
          "doc_count" : 1262298
        },
        {
          "key" : "process",
          "doc_count" : 725722
        },
        {
          "key" : "container",
          "doc_count" : 714964
        },
        {
          "key" : "pod",
          "doc_count" : 629736
        },
        {
          "key" : "system",
          "doc_count" : 272216
        },
        {
          "key" : "cpu",
          "doc_count" : 101099
        },
        {
          "key" : "load",
          "doc_count" : 101099
        },
        {
          "key" : "memory",
          "doc_count" : 101097
        },
        {
          "key" : "process_summary",
          "doc_count" : 101097
        },
        {
          "key" : "node",
          "doc_count" : 101096
        },
        {
          "key" : "filesystem",
          "doc_count" : 16863
        },
        {
          "key" : "fsstat",
          "doc_count" : 16863
        }
      ]
    }
  }
}

There is no state_node. So I changed the filter of the "Kubernetes - Nodes ECS" dashboard from

event.module:kubernetes AND metricset.name:state_node

to

event.module:kubernetes AND metricset.name:node

and now it works:

But I don't receive any deployment states, and the other dashboards don't work. How do I get these states?

Hi @xwiz,

From the data you are sending, I'd say you are not fetching any state metrics data.
Can you make sure that:

  • kube-state-metrics is deployed
  • the kube-state-metrics service is exposed and reachable from Metricbeat?

If the service is called kube-state-metrics and it is exposed on port 8080 (the default settings if you use their upstream manifests), and if Metricbeat is deployed in the cluster, this should work:

- module: kubernetes
  enabled: true
  metricsets:
    - state_deployment
    - state_node
    - state_pod
    - state_container
  period: 10s
  hosts: ["kube-state-metrics.kube-system:8080"]
  in_cluster: true

Hi Pablo,
I deployed kube-state-metrics earlier and the service is running:

[root@kube2 metric_conf]# kubectl get svc --all-namespaces
NAMESPACE              NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-system            kube-state-metrics          ClusterIP      10.105.70.199   <none>        8080/TCP,8081/TCP        6d15h

I used the following YAML for kube-state-metrics:

---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - daemonsets
  - deployments
  - replicasets
  - ingresses
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources:
  - cronjobs
  - jobs
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs: ["list", "watch"]
- apiGroups: ["policy"]
  resources:
  - poddisruptionbudgets
  verbs: ["list", "watch"]
- apiGroups: ["certificates.k8s.io"]
  resources:
  - certificatesigningrequests
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - storageclasses
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling.k8s.io"]
  resources:
  - verticalpodautoscalers
  verbs: ["list", "watch"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-state-metrics
  name: kube-state-metrics
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-state-metrics
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: quay.io/coreos/kube-state-metrics:v1.7.1
        ports:
        - name: http-metrics
          containerPort: 8080
        - name: telemetry
          containerPort: 8081
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    k8s-app: kube-state-metrics
  annotations:
    prometheus.io/scrape: 'true'
spec:
  ports:
  - name: http-metrics
    port: 8080
    targetPort: http-metrics
    protocol: TCP
  - name: telemetry
    port: 8081
    targetPort: telemetry
    protocol: TCP
  selector:
    k8s-app: kube-state-metrics

Yesterday I tried to use the YAML from this article, but the metricsets "state_node", "state_deployment", "state_replicaset", "state_pod" and "state_container" are still unreachable.

It looks like my Metricbeat cannot access kube-state-metrics. How can I check and fix this?

I found this error in my Elasticsearch:

Why can't Metricbeat connect to kube-state-metrics if the service is running correctly on the default ports?

I have resolved this problem. I changed the "hosts" line to the following:

hosts: ["kube-state-metrics.kube-system.svc.cluster.local:8080"]
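
For reference, the full module block now looks roughly like this (a sketch assembled from the snippets above; cluster.local is the default cluster DNS suffix, so adjust it if your cluster uses a different one):

- module: kubernetes
  enabled: true
  metricsets:
    - state_node
    - state_deployment
    - state_replicaset
    - state_pod
    - state_container
  period: 10s
  # Fully qualified name of the kube-state-metrics service in kube-system
  hosts: ["kube-state-metrics.kube-system.svc.cluster.local:8080"]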

Now all of my dashboards work:
