Get OpenShift metrics via Metricbeat into Elasticsearch

Kibana version : 7.7.0

Elasticsearch version : 7.7.0

Metricbeat version : 7.7

Browser version : Chrome 84.0

Original install method (e.g. download page, yum, deb, from source, etc.) and version : RPM from download page

Fresh install or upgraded from other version? Fresh Install

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant) :

We want to deploy Metricbeat on our OpenShift cluster using kubernetes.yml. The bearer token exists inside the pod, but not on the OpenShift server itself. So how should the bearer token be provided in kubernetes.yml? Will Metricbeat pick up the token by itself from inside the pods?

We tried setting ssl.verification_mode: none and commenting out the bearer token line, but it didn't help and we got the error below:
Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 403 in : 403 Forbidden

We searched and found suggestions to grant ClusterRoles, but we didn't find anything related to a ClusterRole in the default kubernetes.yml of Metricbeat.

We have gone through several links, but they didn't help. Please let us know the exact steps and commands we can follow to get OpenShift monitored from Elasticsearch.
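For reference, the part of kubernetes.yml we are referring to looks roughly like this; the bearer_token_file line is the one we commented out:

    - module: kubernetes
      metricsets: [node, system, pod, container, volume]
      period: 10s
      host: ${NODE_NAME}
      hosts: ["https://${NODE_NAME}:10250"]
      # token path inside the Metricbeat pod (mounted via the ServiceAccount)
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token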


Hi!

Did you try to follow the steps described at https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-kubernetes.html#_red_hat_openshift_configuration?

C.

Hi Chris,

Do we only have to deploy the manifest available at https://raw.githubusercontent.com/elastic/beats/7.7/deploy/kubernetes/metricbeat-kubernetes.yaml?
And where should the ClusterRole be defined, in this manifest file or in metricbeat.yml?

Below is the version of OpenShift installed in our environment:

oc v3.11.43
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
openshift v3.11.43
kubernetes v1.11.0+d4cacc0

Hi!

The file you mentioned includes everything regarding configuration, RBAC etc., so you just need to apply this one. Just make sure that you have made the OpenShift-specific changes mentioned at https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-kubernetes.html#_red_hat_openshift_configuration.
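In short, those changes boil down to roughly the following (see the guide for the exact steps):

    # modules.d/kubernetes.yml: talk to the kubelet over HTTPS using the in-pod token and service CA
    hosts: ["https://${NODE_NAME}:10250"]
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    ssl.certificate_authorities:
      - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

    # DaemonSet container securityContext: run privileged
    securityContext:
      runAsUser: 0
      privileged: true

    # and grant the metricbeat ServiceAccount the privileged SCC:
    # oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:metricbeat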

C.

Thanks Chris.

Yes, it has the steps. The main question is whether the ClusterRole below is correct:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  - secrets
  - services
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - deployments
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources:
  - nodes/stats
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - "/metrics"
  verbs:
  - get

I have added nodes/metrics below nodes/stats; is that correct? Also, kube-state-metrics is already installed in another namespace, so can I just change the namespace for that part, or do I need to change it everywhere in the manifest file?

ClusterRole looks good to me!

In order to reach kube-state-metrics in another namespace you just need to tune the respective hosts config in the kubernetes module of the Deployment.

It should be something like hosts: ["kube-state-metrics.custom-namespace:8080"].

For example, where Metricbeat runs in a different namespace than kube-state-metrics (custom-namespace below is just a placeholder), the state_* module would look roughly like this:
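    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
      period: 10s
      host: ${NODE_NAME}
      hosts: ["kube-state-metrics.custom-namespace:8080"]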

Hi Chris,

While deploying Metricbeat on OpenShift, we are getting the error below. We have not enabled security on Elasticsearch. Is it mandatory to enable security?

2020-09-04T11:08:04.753Z ERROR instance/beat.go:951 Exiting: error initializing publisher: missing field accessing 'output.elasticsearch.username' (source:'/etc/metricbeat.yml')
Exiting: error initializing publisher: missing field accessing 'output.elasticsearch.username' (source:'/etc/metricbeat.yml')

Hi!

I don't think so. Feel free to share your complete configuration so I can have a look.

C.

Hi,

We pulled the image into our OpenShift environment and replaced the image path in the YAML accordingly.

Our kube-state-metrics lives in the openshift-monitoring namespace, so we changed that in the YAML, as per your suggestion.

We also added nodes/metrics to the ClusterRole and granted the required permission using the command below:

oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:metricbeat

After doing all these changes, we ran “oc apply -f metricbeat-kubernetes.yaml” and were able to see pods in Running state.

Now we can see data in the Kubernetes Overview ECS (Metricbeat Kubernetes dashboard) only for nodes. No data is coming in for pods, controllers, etc.
Also, in the YAML we can see a connection to localhost:10249. Could you please elaborate on what it is used for, as we might need to replace localhost with another server? We tried the K8s master instead of localhost, but no luck; we still get connection refused.

Metricbeat configuration - metricbeat-kubernetes.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    # To enable hints based autodiscover uncomment this:
    #metricbeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["https://${NODE_NAME}:10250"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      #ssl.verification_mode: "none"
      # If there is a CA bundle that contains the issuer of the certificate used in the Kubelet API,
      # remove ssl.verification_mode entry and use the CA, for instance:
      ssl.certificate_authorities:
        - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    # Currently `proxy` metricset is not supported on Openshift, comment out section
    - module: kubernetes
      metricsets:
        - proxy
      period: 10s
      host: ${NODE_NAME}
      hosts: ["ose-master01.test8.ads.spirnet.ph:10249"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: nexus.oce-mnl.ads.spirnet.ph:8099/metricbeat:7.9.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "10.131.111.32"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: data
          mountPath: /usr/share/metricbeat/data
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0640
          name: metricbeat-daemonset-config
      - name: modules
        configMap:
          defaultMode: 0640
          name: metricbeat-daemonset-modules
      - name: data
        hostPath:
          # When metricbeat runs as non-root user, this directory needs to be writable by group (g+w)
          path: /var/lib/metricbeat-data
          type: DirectoryOrCreate
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  # This module requires `kube-state-metrics` up and running under `kube-system` namespace
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
        - state_cronjob
        - state_resourcequota
        # Uncomment this to get k8s events:
        #- event
      period: 10s
      host: ${NODE_NAME}
      hosts: ["kube-state-metrics.openshift-monitoring:8080"]
    #- module: kubernetes
    #  metricsets:
    #    - apiserver
    #  hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
    #  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    #  ssl.certificate_authorities:
    #    - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    #  period: 30s
---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: nexus.oce-mnl.ads.spirnet.ph:8099/metricbeat:7.9.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: 10.131.111.32
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value:
        - name: ELASTICSEARCH_PASSWORD
          value:
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: metricbeat-deployment-config
      - name: modules
        configMap:
          defaultMode: 0640
          name: metricbeat-deployment-modules
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  - secrets
  - services
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - deployments
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources:
  - nodes/stats
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - "/metrics"
  verbs:
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
---

Hi Harshita,

In the YAML file, comment out the username and password entries under the "output.elasticsearch" section if you have not configured security at the Elasticsearch level.
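i.e. roughly this in the metricbeat-daemonset-config and metricbeat-deployment-config ConfigMaps:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      #username: ${ELASTICSEARCH_USERNAME}
      #password: ${ELASTICSEARCH_PASSWORD}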

Thanks and Regards,
Rakesh Chhabria

Thanks Rakesh for your response, but no luck, as we mentioned before. Once we comment out the username, we get the error below.

    env:
    - name: ELASTICSEARCH_HOST
      value: "10.131.111.32"
    - name: ELASTICSEARCH_PORT
      value: "9200"
    #- name: ELASTICSEARCH_USERNAME
    #  value: elastic
    #- name: ELASTICSEARCH_PASSWORD
    #  value: changeme
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    securityContext:
      runAsUser: 0
      # If using Red Hat OpenShift uncomment this:
      privileged: true

nod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/usr/share/metricbeat", "exe": "/usr/share/metricbeat/metricbeat", "name": "metricbeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter"}, "start_time": "2020-09-05T17:12:37.720Z"}}}
2020-09-05T17:12:38.001Z INFO instance/beat.go:299 Setup Beat: metricbeat; Version: 7.9.0
2020-09-05T17:12:38.001Z INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'metricbeat-7.9.0' as ILM is enabled.
2020-09-05T17:12:38.003Z INFO instance/beat.go:419 metricbeat stopped.
2020-09-05T17:12:38.003Z ERROR instance/beat.go:951 Exiting: error initializing publisher: missing field accessing 'output.elasticsearch.username' (source:'/etc/metricbeat.yml')
Exiting: error initializing publisher: missing field accessing 'output.elasticsearch.username' (source:'/etc/metricbeat.yml')

The thing is, when we specify the user and password that were there by default, the pods come up, but in the logs we can see an issue with the connection to port 10249. Could you please suggest what this port is used for and what we need to specify instead of localhost? We tried the K8s master instead of localhost but got the same connection error.

2020-09-05T17:15:45.159Z INFO [index-management] idxmgmt/std.go:412 Set setup.template.pattern to 'metricbeat-7.9.0-*' as ILM is enabled.
2020-09-05T17:15:45.159Z INFO [index-management] idxmgmt/std.go:446 Set settings.index.lifecycle.rollover_alias in template to {metricbeat-7.9.0 {now/d}-000001} as ILM is enabled.
2020-09-05T17:15:45.159Z INFO [index-management] idxmgmt/std.go:450 Set settings.index.lifecycle.name in template to {metricbeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2020-09-05T17:15:45.180Z INFO template/load.go:89 Template metricbeat-7.9.0 already exists and will not be overwritten.
2020-09-05T17:15:45.181Z INFO [index-management] idxmgmt/std.go:298 Loaded index template.
2020-09-05T17:15:45.196Z INFO [index-management] idxmgmt/std.go:309 Write alias successfully generated.
2020-09-05T17:15:45.211Z INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(http://10.131.111.32:9200)) established
2020-09-05T17:15:51.190Z INFO module/wrapper.go:259 Error fetching data for metricset kubernetes.proxy: error getting processed metrics: error making http request: Get "http://ose-master01.test8.ads.spirnet.ph:10249/metrics": dial tcp 10.122.104.96:10249: connect: connection refused

Hi @ChrsMark,
We are trying to get the stats for OpenShift, so we commented out the proxy section in the YAML file. Now we are no longer getting errors in the logs related to port 10249, but we are only getting node metrics, not pod metrics. Is there any specific config change we are supposed to make in the metricbeat-kubernetes.yaml file to get the data for the pods?

Also, the Metricbeat pod on the OpenShift master node (ose-master01.test8.ads.spirnet.ph) is failing. Upon checking the logs, we found the error "Exiting: error initializing publisher: missing field accessing 'output.elasticsearch.username' (source:'/etc/metricbeat.yml')". Can this be the cause of not getting pod metrics?

[root@ose-master01 metricbeat]# oc get pods -o wide|grep -i metricbeat
metricbeat-5f8d687f45-zw7bl 0/1 CrashLoopBackOff 14 47m 10.122.104.96 ose-master01.test8.ads.spirnet.ph
metricbeat-frffr 1/1 Running 0 47m 10.122.103.116 ssodb01.test8.ads.spirnet.ph
metricbeat-lscff 1/1 Running 0 47m 10.122.105.21 bil03.test8.ads.spirnet.ph
metricbeat-nsgvl 1/1 Running 0 47m 10.122.105.81 bal01.test8.ads.spirnet.ph
metricbeat-s4mf7 1/1 Running 0 47m 10.122.103.136 blcdb01.test8.ads.spirnet.ph
metricbeat-z269k 1/1 Running 0 47m 10.122.106.6 bil02.test8.ads.spirnet.ph

[root@ose-master01 metricbeat]# oc logs metricbeat-5f8d687f45-zw7bl
2020-09-07T11:31:31.768Z INFO instance/beat.go:640 Home path: [/usr/share/metricbeat] Config path: [/usr/share/metricbeat] Data path: [/usr/share/metricbeat/data] Logs path: [/usr/share/metricbeat/logs]
2020-09-07T11:31:31.770Z INFO instance/beat.go:648 Beat ID: bdfaba70-e4a6-45c6-a6f9-58639a4a7731
2020-09-07T11:31:31.771Z INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
2020-09-07T11:31:31.771Z INFO [beat] instance/beat.go:976 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/metricbeat", "data": "/usr/share/metricbeat/data", "home": "/usr/share/metricbeat", "logs": "/usr/share/metricbeat/logs"}, "type": "metricbeat", "uuid": "bdfaba70-e4a6-45c6-a6f9-58639a4a7731"}}}
2020-09-07T11:31:31.771Z INFO [beat] instance/beat.go:985 Build info {"system_info": {"build": {"commit": "b2ee705fc4a59c023136c046803b56bc82a16c8d", "libbeat": "7.9.0", "time": "2020-08-11T20:16:10.000Z", "version": "7.9.0"}}}
2020-09-07T11:31:31.771Z INFO [beat] instance/beat.go:988 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.14.4"}}}
2020-09-07T11:31:31.773Z INFO [beat] instance/beat.go:992 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-06-18T03:49:37Z","containerized":true,"name":"ose-master01.test8.ads.spirnet.ph","ip":["127.0.0.1/8","10.122.116.146/22","10.122.104.96/22","10.122.108.161/22","10.122.112.156/22","172.17.0.1/16","172.18.0.1/23"],"kernel_version":"3.10.0-862.el7.x86_64","mac":["00:50:56:9a:eb:fb","00:50:56:9a:97:ee","00:50:56:9a:69:4c","00:50:56:9a:bd:d7","02:42:e8:57:6e:39","76:6f:14:37:20:dd","6a:ab:41:72:0e:46","8e:31:8c:66:98:31","2e:1d:55:5c:72:3a","2e:33:99:66:73:0a","22:50:22:03:d0:6c","b6:1c:05:aa:53:46","6a:5c:90:c5:fe:60","8a:f4:d6:fa:3e:90","d2:4c:ed:9b:8e:ab","d2:ba:d8:63:d1:90","96:68:f3:df:a1:b0"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":8,"patch":2003,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0}}}
2020-09-07T11:31:31.773Z INFO [beat] instance/beat.go:1021 Process info {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/metricbeat", "exe": "/usr/share/metricbeat/metricbeat", "name": "metricbeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter"}, "start_time": "2020-09-07T11:31:30.930Z"}}}
2020-09-07T11:31:31.773Z INFO instance/beat.go:299 Setup Beat: metricbeat; Version: 7.9.0
2020-09-07T11:31:31.773Z INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'metricbeat-7.9.0' as ILM is enabled.
2020-09-07T11:31:31.773Z INFO instance/beat.go:419 metricbeat stopped.
2020-09-07T11:31:31.773Z ERROR instance/beat.go:951 Exiting: error initializing publisher: missing field accessing 'output.elasticsearch.username' (source:'/etc/metricbeat.yml')
Exiting: error initializing publisher: missing field accessing 'output.elasticsearch.username' (source:'/etc/metricbeat.yml')

We also tried to hardcode the Elasticsearch IP and port in the YAML file and comment out the variable part, but it didn't help.

    output.elasticsearch:
      hosts: ['10.131.111.32:9200']
      #hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      #username: ${ELASTICSEARCH_USERNAME}
      #password: ${ELASTICSEARCH_PASSWORD}
---

Awaiting your response so that we can see pod and controller metrics. We are stuck.

Hi!

Can you elaborate more on which pod metrics you are not collecting? Something I usually do in order to debug such cases is to check from Kibana which metricsets ship metrics by querying something like event.metricset: state_pod. This way I can get an insight into which metricsets are functional and which are not.

Regarding the issue with your master node, I really cannot see any reason why this is happening. Maybe you can just delete the manifest and re-apply it so as to make sure that every ConfigMap etc. is properly updated.
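Something like:

    oc delete -f metricbeat-kubernetes.yaml
    oc apply -f metricbeat-kubernetes.yaml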

C.

Hi @ChrsMark,

We tried delete -f and apply -f to get fresh ConfigMaps. Could you please suggest how to debug the error below? We have tried many things but are not able to get metrics.

2020-09-09T11:31:11.436Z        INFO    module/wrapper.go:259   Error fetching data for metricset kubernetes.state_node: error doing HTTP request to fetch 'state_node' Metricset data: unexpected status code 401 from server
2020-09-09T11:31:11.477Z        ERROR   [kubernetes.state_deployment]   state_deployment/state_deployment.go:98 unexpected status code 401 from server
2020-09-09T11:31:11.477Z        ERROR   [kubernetes.state_pod]  state_pod/state_pod.go:101      unexpected status code 401 from server
2020-09-09T11:31:11.567Z        ERROR   [kubernetes.state_replicaset]   state_replicaset/state_replicaset.go:98 unexpected status code 401 from server
2020-09-09T11:31:21.036Z        ERROR   [kubernetes.state_resourcequota]        state_resourcequota/state_resourcequota.go:73   unexpected status code 401 from server
2020-09-09T11:31:21.235Z        INFO    module/wrapper.go:259   Error fetching data for metricset kubernetes.state_cronjob: error getting metrics: unexpected status code 401 from server
2020-09-09T11:31:21.235Z        INFO    module/wrapper.go:259   Error fetching data for metricset kubernetes.state_container: error getting event: unexpected status code 401 from server
2020-09-09T11:31:21.435Z        INFO    module/wrapper.go:259   Error fetching data for metricset kubernetes.state_node: error doing HTTP request to fetch 'state_node' Metricset data: unexpected status code 401 from server
2020-09-09T11:31:21.477Z        ERROR   [kubernetes.state_deployment]   state_deployment/state_deployment.go:98 unexpected status code 401 from server
2020-09-09T11:31:21.478Z        ERROR   [kubernetes.state_pod]  state_pod/state_pod.go:101      unexpected status code 401 from server
2020-09-09T11:31:21.567Z        ERROR   [kubernetes.state_replicaset]   state_replicaset/state_replicaset.go:98 unexpected status code 401 from server

Hi!

This error indicates that Metricbeat cannot access kube-state-metrics.

You need to make sure that you have properly configured the module to talk to the correct endpoint (hosts setting).
You can exec into the Metricbeat pod and try to access the kube-state-metrics endpoint with curl to check that the service is reachable.
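For example (the pod name is just a placeholder; use the same endpoint as in your hosts setting):

    oc exec <metricbeat-pod-name> -n kube-system -- \
      curl -s "http://kube-state-metrics.openshift-monitoring:8080/metrics"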

C.

Hi @ChrsMark,

We have checked connectivity using the curl command below and were able to connect to the endpoint from the Metricbeat pod.

TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"

curl -k --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt "https://kube-state-metrics.openshift-monitoring.svc:8443" -H "Authorization: Bearer $TOKEN"

<html>
             <head><title>Kube Metrics Server</title></head>
             <body>
             <h1>Kube Metrics</h1>
                         <ul>
             <li><a href='/metrics'>metrics</a></li>
             <li><a href='/healthz'>healthz</a></li>
                         </ul>
             </body>
</html>

But we are still getting the error below in the logs.

2020-09-11T08:00:22.297Z        INFO    module/wrapper.go:259   Error fetching data for metricset kubernetes.state_container: error getting event: unexpected status code 401 from server
2020-09-11T08:00:22.512Z        INFO    module/wrapper.go:259   Error fetching data for metricset kubernetes.state_node: error doing HTTP request to fetch 'state_node' Metricset data: unexpected status code 401 from server
2020-09-11T08:00:22.514Z        ERROR   [kubernetes.state_pod]  state_pod/state_pod.go:101      unexpected status code 401 from server
2020-09-11T08:00:22.514Z        ERROR   [kubernetes.state_deployment]   state_deployment/state_deployment.go:98 unexpected status code 401 from server
2020-09-11T08:00:22.597Z        ERROR   [kubernetes.state_replicaset]   state_replicaset/state_replicaset.go:98 unexpected status code 401 from server 

Hi,

We have resolved it by providing the bearer token and the SSL certificate authority for kube-state-metrics using the config below:

      host: ${NODE_NAME}
      hosts: ["https://kube-state-metrics.openshift-monitoring.svc:8443"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.certificate_authorities:
        - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

    - module: kubernetes
      metricsets:
        - apiserver
      hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.certificate_authorities:
        - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      period: 30s

Now we are just getting the Overview in the Kubernetes ECS dashboard, but we are not getting any data related to controllers, schedulers etc. So we are missing the main attributes of pods, containers etc., like CPU and memory. How can we check, via logs or commands, whether Metricbeat is able to gather those stats? There are no errors as such in the Metricbeat pod logs.

We are looking forward to your assistance on this; otherwise it is of little use to only get the number of pods.
It would also help if you could give us a sample of all the data this manifest file collects.

OpenShift internally collects these metrics via node-exporter into Prometheus and then displays them in Grafana. How can we see them directly via Elasticsearch/Kibana?

You can find CPU, memory etc. of hosts/pods/containers in the Metrics app of Kibana: https://www.elastic.co/guide/en/metrics/guide/current/metrics-app-overview.html

Hi,

We have OpenShift in our environment and are only getting the Kubernetes Overview ECS dashboard. We are still not getting proxy and controller data. Also, we want to install Metricbeat for a particular namespace, i.e. Optima, and want details like node/pod etc. of that particular namespace only.

Please suggest how we can change the polling interval for metricbeat data from 10 sec to 5 minutes.
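For reference, this is the 10-second interval we are referring to in the module config; we assume changing period here (e.g. to 5m) is the way to do it, but please confirm:

    - module: kubernetes
      metricsets:
        - node
        - pod
        - container
      period: 5m      # was 10s
      host: ${NODE_NAME}
      hosts: ["https://${NODE_NAME}:10250"]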


Please let us know if you have anything on our last questions... Also, can you confirm how to change the metrics collection interval?

Apart from the period setting, we couldn't find anything about it in the manifest file.