Kubernetes pods found, but all metrics 0%

Hello,

I did some tests with the new Infrastructure UI in Kibana 6.7.1. Host metrics are working fine, but Kubernetes metrics seem to be missing some data. Although the pod names, nodes and namespaces are recognised, all metrics report 0%: CPU and memory show 0%, and inbound / outbound traffic shows 0 bit/s.

Metricbeat data is generated on one OpenShift master node with the following module config:

- module: kubernetes
  metricsets:
    - node
    - system
    - pod
    - container
    - volume
  period: 10s
  in_cluster: false
  add_metadata: true
  kube_config: ../.kube/config
  host: "node1"
  hosts: ["https://node1:10250","https://node2:10250",.......]
  ssl.certificate_authorities: ["/etc/pki/ca-trust/source/anchors/openshift-ca.crt"]
  ssl.certificate: "crt"
  ssl.key: "key"

All Kubernetes Metricbeat data is in metricbeat-*. Checking the data in Discover, I can find the related metrics:

[screenshot: Discover results showing the Kubernetes metric documents]

As you can see, we are not running Metricbeat as a DaemonSet (as this caused us too many headaches in OpenShift). Could it be related to in_cluster: false?

Is there something else that can cause the Kubernetes metrics in the Infrastructure UI to show 0?

Thanks.

Willem

@willemdh When you click on the metric detail pages, are the charts empty?

@simianhacker Thanks for the fast reply. When I click on the metric detail page, there seems to be no data:

> There is no data to display. Try adjusting your time or filter.

Please suggest what else I should check to get this working.

Best regards,

Willem

@simianhacker Found the cause of the problem.

We only run the Metricbeat kubernetes module on one master node, with:

- module: kubernetes
  metricsets:
    - node
    - system
    - pod
    - container
    - volume
  period: 10s
  in_cluster: false
  add_metadata: true
  kube_config: ../.kube/config
  host: "node1"
  hosts: ["https://node1:10250","https://node2:10250",.......]
  ssl.certificate_authorities: ["/etc/pki/ca-trust/source/anchors/openshift-ca.crt"]
  ssl.certificate: "crt"
  ssl.key: "key"

So it seems like:

hosts: ["https://node1:10250","https://node2:10250",.......]

works to get the metrics from the pods, but not to attach the metadata to those metrics. The metadata is only added for the node specified in:

host: "node1"

I was able to work around this issue by duplicating the kubernetes module config for each OpenShift node, like this:

- module: kubernetes
  metricsets:
    - node
    - system
    - pod
    - container
    - volume
  period: 10s
  in_cluster: false
  add_metadata: true
  kube_config: ../.kube/config
  host: "node1"
  hosts: ["https://node1:10250"]
  ssl.certificate_authorities: ["/etc/pki/ca-trust/source/anchors/openshift-ca.crt"]
  ssl.certificate: "crt"
  ssl.key: "key"

- module: kubernetes
  metricsets:
    - node
    - system
    - pod
    - container
    - volume
  period: 10s
  in_cluster: false
  add_metadata: true
  kube_config: ../.kube/config
  host: "node2"
  hosts: ["https://node2:10250"]
  ssl.certificate_authorities: ["/etc/pki/ca-trust/source/anchors/openshift-ca.crt"]
  ssl.certificate: "crt"
  ssl.key: "key"

This seems weird. Why would Elastic allow configuring multiple hosts, while adding the metadata only works for the host directive?

Or is there something wrong with my config that I'm not seeing? The documentation is very sparse on how this metadata is added.

Grtz

Willem

Hi @willemdh,

What's happening is that metadata enrichment is focused on a single node, as our recommended deployment model runs one Metricbeat instance per node in the cluster.
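
In that layout, each instance points host and hosts at its own node only. A minimal sketch of the per-node config, reusing your settings and assuming a NODE_NAME environment variable is injected into each instance (for example via the Kubernetes Downward API, spec.nodeName):

- module: kubernetes
  metricsets:
    - node
    - system
    - pod
    - container
    - volume
  period: 10s
  in_cluster: false
  add_metadata: true
  kube_config: ../.kube/config
  # NODE_NAME is assumed to be injected per node (e.g. Downward API);
  # the only change from your config is that host and hosts both refer
  # to the local node, so metrics and metadata come from the same place.
  host: ${NODE_NAME}
  hosts: ["https://${NODE_NAME}:10250"]
  ssl.certificate_authorities: ["/etc/pki/ca-trust/source/anchors/openshift-ca.crt"]
  ssl.certificate: "crt"
  ssl.key: "key"

That way every instance enriches only the pods of the node it runs on.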

I'm wondering, why did you go with a single instance?

Best regards

@exekias Short story: we started by running it as a DaemonSet in our OpenShift cluster, but ran into too many issues. We even had to open a ticket with Red Hat, who actually said that running Beats as a DaemonSet is 'unsupported'.
So we tried running it on one node, where we ran into several authorization issues: the kube config service account had to be adjusted, and our master nodes are the only nodes which have the correct certificates (which are in /etc/origin/master/):

ssl.certificate: "crt"
ssl.key: "key"

Our OpenShift admin didn't want to copy those certificates to the other (non-master) OpenShift nodes.

As configuring multiple hosts was allowed (see the hosts directive), this seemed like a good idea. All metrics were indexed correctly, and we never really needed the extra metadata for Metricbeat data (we did need it for Filebeat data). It now seems the metadata was added correctly, but only for the node running Metricbeat with the kubernetes module.
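
For reference, on the Filebeat side that enrichment comes from the add_kubernetes_metadata processor; a minimal sketch, assuming the same out-of-cluster kube_config and node name as the module config above:

processors:
  - add_kubernetes_metadata:
      # same out-of-cluster assumptions as the kubernetes module config
      in_cluster: false
      host: "node1"
      kube_config: ../.kube/config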

Thanks for the explanation, your current approach seems correct.

We have thought about adding a global scope to metadata processing & autodiscover; that feature may come in the future and would simplify your settings.

Br
