Kubernetes metadata

I'm trying to test the new Kubernetes metadata features that were added to Beats, and would therefore like to pull the Docker image for the alpha, but it appears it is not being published.

Could it be published please?

It also appears the GitHub repo links to the Logstash documentation - https://github.com/elastic/beats-docker

Hi @djschny,

Images are published under the official Elastic Docker registry; it should be: docker.elastic.co/beats/filebeat:6.0.0-alpha2

Take a look at https://www.elastic.co/guide/en/beats/filebeat/master/running-on-docker.html for the complete documentation.

Replace filebeat with your desired beat :wink:
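
For example, assuming you just want to confirm the images are available from that registry, pulls along these lines should work (the tag is the one mentioned above; swap in whichever beat you need):

docker pull docker.elastic.co/beats/filebeat:6.0.0-alpha2
docker pull docker.elastic.co/beats/metricbeat:6.0.0-alpha2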

Also, as you saw, these are pretty recent additions, so any feedback is welcome!

Thanks, it pulled the image, but the kubernetes module appears to not work:

Exiting: error initializing processors: the processor add_kubernetes_metadata doesn't exist

I updated the configuration file as outlined at https://www.elastic.co/guide/en/beats/filebeat/master/add-kubernetes-metadata.html

I found the issue:

add_kubernetes_metadata was renamed recently (it will be the definitive name in the next version); in alpha2 it was called kubernetes. The rest of the documentation should still be valid. I'm checking our published docs.
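
For anyone following along, a minimal sketch of what that looks like in filebeat.yml, using the in_cluster option that comes up later in this thread:

# 6.0.0-alpha2: the processor is still named "kubernetes"
processors:
- kubernetes:
    in_cluster: true

# later builds: the same settings under the new name
processors:
- add_kubernetes_metadata:
    in_cluster: true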

Thanks! That did the trick, but now receiving the following:

2017/06/26 20:32:42.265027 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/06/26 20:32:42.270519 podwatcher.go:87: ERR kubernetes: Watching API eror decode error status: payload is not a kubernetes protobuf object
2017/06/26 20:32:43.271056 podwatcher.go:82: INFO kubernetes: Watching API for pod events
2017/06/26 20:32:43.275282 podwatcher.go:87: ERR kubernetes: Watching API eror decode error status: payload is not a kubernetes protobuf object
2017/06/26 20:32:44.275967 podwatcher.go:82: INFO kubernetes: Watching API for pod events

This is with Kubernetes 1.6.6

Hi,

1.6.X should be supported. Could you please share some more details on your setup? I'm interested in how you launch Filebeat and what settings you are using. It needs to be able to access the k8s API server; normally this is easy from within a pod, but it may require some more parameters if you are launching it from outside the cluster.

Perhaps it's an authentication error with the k8s API (with a poor error message).

Sure, it's running as a pod. See below for the kubernetes YAML:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat-k8s-logging
  namespace: kube-instrumentation
  labels:
    app: filebeat
    function: logging
spec:
  template:
    metadata:
      labels:
        app: filebeat
        function: logging
      name: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.0.0-alpha2
        resources:
          limits:
            cpu: 50m
            memory: 50Mi
        securityContext:
          privileged: true
          runAsUser: 0
        env:
          - name: ELASTICSEARCH_URL
            value: http://es-k8s-logging:9200
      terminationGracePeriodSeconds: 30

I made sure to set "in_cluster: true" as well.

Your pod settings look good. I can see you are using the kube-instrumentation namespace; you will need to tell Filebeat about it like this:

kubernetes:
  in_cluster: true
  namespace: kube-instrumentation

Apart from that everything looks good, so I would suggest enabling debug output for the kubernetes processor; when launching Filebeat, add this flag: -d kubernetes
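
For reference, a full invocation with that selector enabled might look roughly like this (the config path is only an assumption for illustration; -e sends logs to stderr so you can see the debug output directly):

filebeat -e -c /usr/share/filebeat/filebeat.yml -d "kubernetes"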

Thanks for the tip. I tried setting that, but still no luck. With debugging enabled, not much else exciting comes out that would help us. The following are the only items that stood out to me:

2017/06/26 22:49:06.519807 podwatcher.go:82: INFO kubernetes: Watching API for pod events
2017/06/26 22:49:06.521560 podwatcher.go:87: ERR kubernetes: Watching API eror decode error status: payload is not a kubernetes protobuf object
2017/06/26 22:49:06.819216 indexing.go:53: DBG  Incoming source value: %!(EXTRA string=/var/log/containers/kibana-k8s-logging-2678538301-9h4qj_kube-instrumentation_kibana-79c3aa71931070601554bec365acfc364c2138756b045e63b1e9d5019d82dd53.log)
2017/06/26 22:49:06.819267 indexing.go:59: DBG  Using container id: %!(EXTRA string=)
2017/06/26 22:49:07.522742 podwatcher.go:82: INFO kubernetes: Watching API for pod events
2017/06/26 22:49:07.525240 podwatcher.go:87: ERR kubernetes: Watching API eror decode error status: payload is not a kubernetes protobuf object

Thanks for the help on this.

It seems you are hitting this issue in the client library we use: https://github.com/ericchiang/k8s/issues/46. Do you see that error every second, or is it sporadic? I just gave it a try on GKE and it's working for me, but I would definitely like to troubleshoot this issue. Can you share some info about your k8s cluster?

As a side note:

The processor expects you to use the /var/lib/docker/containers path, as it's easier to extract container IDs from there.
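
In practice that means pointing the prospector at the Docker log files directly, something along these lines (just a sketch; the glob may need adjusting for your setup):

filebeat.prospectors:
- paths:
    # JSON log files written by Docker's json-file logging driver
    - /var/lib/docker/containers/*/*.log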

Do you see that error every second, or is it sporadic?

It is regular, about every second.

Can you share some info about your k8s cluster?

Sure, I'm using kubeadm-dind-cluster (https://github.com/kubernetes-retired/kubeadm-dind-cluster), a Kubernetes multi-node test cluster based on kubeadm, to run the cluster locally. All default settings, and then the YAML I specified. While diagnosing the problem, I exec into bash on the pod and run filebeat manually, specifying the config, until I get it working; then I will move that into the YAML.

The processor expects you to use the /var/lib/docker/containers path, as it's easier to extract container IDs from there.

Unfortunately that path is empty for me.

I've been checking kubeadm-dind-cluster and it seems too custom, very much oriented to development or light testing.

I'll try to debug the issue, as it could affect other scenarios, but I would suggest using a real cluster, or at least minikube, to test this.

I'll post here with any findings about this particular issue. Thanks for reporting it!

I deployed into a cluster running on AWS and have both good news and bad news:

  • Good News - the error messages about the API are gone
  • Bad News - no kubernetes.* fields are showing up in the documents created in Elasticsearch

I turned on -d "kubernetes" and the only new logs that show up are ones like the following:

2017/06/27 16:32:17.342180 indexing.go:53: DBG  Incoming source value: %!(EXTRA string=/var/log/containers/weave-net-vb38v_kube-system_weave-npc-63b03c39a5f7c41b6c76b3c1307ff875ca51724d379b6d3decbb0d31fe4abd30.log)
2017/06/27 16:32:17.347153 indexing.go:59: DBG  Using container id: %!(EXTRA string=)
2017/06/27 16:32:18.145902 indexing.go:53: DBG  Incoming source value: %!(EXTRA string=/var/log/containers/weave-net-vb38v_kube-system_weave-npc-63b03c39a5f7c41b6c76b3c1307ff875ca51724d379b6d3decbb0d31fe4abd30.log)
2017/06/27 16:32:18.242385 indexing.go:59: DBG  Using container id: %!(EXTRA string=)
2017/06/27 16:32:19.143055 indexing.go:53: DBG  Incoming source value: %!(EXTRA string=/var/log/containers/weave-net-vb38v_kube-system_weave-npc-63b03c39a5f7c41b6c76b3c1307ff875ca51724d379b6d3decbb0d31fe4abd30.log)
2017/06/27 16:32:19.147099 indexing.go:59: DBG  Using container id: %!(EXTRA string=)

Try with '/var/lib/docker/containers/*/*.log' there

Changing to that path does the trick!

However, I'm failing to understand why that should matter. Ideally the path would be agnostic of which container engine Kubernetes is using and of where its logs are located.

Is it assuming that exact path must be present?

Certainly this is something we could add; for now the code expects /var/lib/docker/containers/, but to be honest the files under /var/log/containers/* are just symlinks into that path.
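
A quick way to confirm this on a node is to check where the symlinks point (a hypothetical example; actual file names depend on your pods):

ls -l /var/log/containers/
# each entry ultimately resolves to a file under /var/lib/docker/containers/<container-id>/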

As you can see, we have been working on extending our Docker and Kubernetes support, both for logging and metrics; you can expect more features like this in the future.
