Did some tests on the new Infrastructure UI in Kibana 6.7.1. Host metrics are working fine, but Kubernetes metrics seem to be missing some data. Although the pod names, nodes and namespaces are recognised, all metrics report 0%: CPU and memory both show 0%, and inbound/outbound traffic shows 0 bit/s.
Metricbeat data is generated on one OpenShift master node with the following module config:
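(The actual config did not survive in this copy of the report. For context, a single-node setup of the Metricbeat kubernetes module typically looks roughly like the sketch below; the host, period, metricsets and certificate paths are illustrative placeholders, not the reporter's real values.)

```yaml
# Illustrative sketch only -- not the reporter's actual config.
metricbeat.modules:
  - module: kubernetes
    metricsets: ["node", "system", "pod", "container", "volume"]
    period: 10s
    hosts: ["https://localhost:10250"]        # kubelet endpoint (placeholder)
    ssl.certificate: "/path/to/client.crt"    # placeholder path
    ssl.key: "/path/to/client.key"            # placeholder path
```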
What's happening is that metadata enrichment is scoped to a single node, as our recommended deployment mode runs one Metricbeat instance per node in the cluster.
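That per-node deployment mode is usually realized as a DaemonSet, so every node runs its own Metricbeat and enriches the metrics for its own pods. A minimal sketch of such a manifest (names, namespace, service account and image tag are assumptions for illustration):

```yaml
# Minimal illustrative DaemonSet -- names and image tag are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: metricbeat
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      serviceAccountName: metricbeat        # hypothetical service account
      containers:
        - name: metricbeat
          image: docker.elastic.co/beats/metricbeat:6.7.1
          args: ["-c", "/etc/metricbeat.yml", "-e"]
```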
I'm wondering, why did you go with a single instance?
@exekias Short story => We started with running it as a DaemonSet in our OpenShift cluster, but ran into too many issues. We even had to open a ticket with Red Hat, who actually said running Beats as a DaemonSet is 'unsupported'.
So we tried running it on one node, where we ran into several authorization issues: the kubeconfig service account had to be adjusted, and our master nodes are the only nodes which have the correct certificates (which are in /etc/origin/master/):
ssl.certificate: "crt"
ssl.key: "key"
Our OpenShift admin didn't want to copy those certificates to the other (non-master) OpenShift nodes.
As the module allows configuring multiple hosts (see the hosts directive), this seemed like a good idea. All metrics were indexed correctly, and we never really needed the extra metadata for Metricbeat data (we did need it for Filebeat data). It now seems the metadata was added correctly, but only for the node running the Metricbeat instance with the kubernetes module.
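For reference, pointing a single Metricbeat instance at several kubelets via the hosts directive looks roughly like this (hostnames are placeholders); as noted above, kubernetes metadata enrichment still only applies to the node the instance itself runs on:

```yaml
# Illustrative sketch -- hostnames are placeholders.
- module: kubernetes
  metricsets: ["node", "pod", "container"]
  hosts:
    - "https://master-1:10250"
    - "https://node-1:10250"
    - "https://node-2:10250"
```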