I'm trying to test the new Kubernetes metadata features that were recently added to Beats, so I'd like to pull the Docker image for the alpha, but it doesn't appear to be published.
add_kubernetes_metadata was renamed recently (it will be the definitive name in the next version). In alpha2 the processor was still called kubernetes. The rest of the documentation should be valid; I'm checking our published docs.
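For reference, a minimal config sketch for alpha2 would look something like this (I'm writing the option names from memory, so please double-check them against the docs for the version you run; on newer versions just rename the processor to add_kubernetes_metadata):

```yaml
# filebeat.yml snippet for alpha2 -- the processor is still called `kubernetes` there.
processors:
  - kubernetes:
      in_cluster: true   # assumes Filebeat runs as a pod inside the cluster; verify the option name for your version
```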
Thanks! That did the trick, but now I'm receiving the following:
2017/06/26 20:32:42.265027 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/06/26 20:32:42.270519 podwatcher.go:87: ERR kubernetes: Watching API eror decode error status: payload is not a kubernetes protobuf object
2017/06/26 20:32:43.271056 podwatcher.go:82: INFO kubernetes: Watching API for pod events
2017/06/26 20:32:43.275282 podwatcher.go:87: ERR kubernetes: Watching API eror decode error status: payload is not a kubernetes protobuf object
2017/06/26 20:32:44.275967 podwatcher.go:82: INFO kubernetes: Watching API for pod events
1.6.X should be supported. Could you please share some more details about your setup? I'm interested in how you launch Filebeat and what settings you are using. It needs to be able to access the k8s API server; normally this is easy from within a pod, but it may require some extra parameters if you are launching it from outside the cluster.
Perhaps it's an authentication error with the k8s API (with a poor error message).
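One quick way to rule that out, assuming curl is available in your Filebeat container, is to hit the API server with the pod's service account token:

```sh
# Run from inside the Filebeat pod; these are the standard in-cluster service account paths.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/version
```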
Apart from that everything is looking good, so I would suggest enabling debug logging for the kubernetes processor. When launching Filebeat, add this flag: -d kubernetes
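For example, assuming you start it directly with a config file (the config path here is just a placeholder), the full command would be something like:

```sh
# -e logs to stderr, -d enables the debug selector for the kubernetes processor
filebeat -e -c /etc/filebeat/filebeat.yml -d "kubernetes"
```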
Thanks for the tip. I tried setting that, but still no luck. With debugging enabled, not much else exciting comes out that would help us. The following are the only items that stood out to me:
2017/06/26 22:49:06.519807 podwatcher.go:82: INFO kubernetes: Watching API for pod events
2017/06/26 22:49:06.521560 podwatcher.go:87: ERR kubernetes: Watching API eror decode error status: payload is not a kubernetes protobuf object
2017/06/26 22:49:06.819216 indexing.go:53: DBG Incoming source value: %!(EXTRA string=/var/log/containers/kibana-k8s-logging-2678538301-9h4qj_kube-instrumentation_kibana-79c3aa71931070601554bec365acfc364c2138756b045e63b1e9d5019d82dd53.log)
2017/06/26 22:49:06.819267 indexing.go:59: DBG Using container id: %!(EXTRA string=)
2017/06/26 22:49:07.522742 podwatcher.go:82: INFO kubernetes: Watching API for pod events
2017/06/26 22:49:07.525240 podwatcher.go:87: ERR kubernetes: Watching API eror decode error status: payload is not a kubernetes protobuf object
It seems you are hitting this issue in the client library we use: https://github.com/ericchiang/k8s/issues/46. Do you see that error every second, or is it sporadic? I just gave it a try in GKE and it works for me, but I would definitely like to troubleshoot this issue. Can you share some info about your k8s cluster?
As a side note:
The processor expects you to use the /var/lib/docker/containers path, as it's easier to extract container ids from there.
However, I'm failing to understand why that should matter. Ideally the path would be agnostic of which container engine Kubernetes is using and of where the logs are located.
Certainly this is something we could add; for now the code expects /var/lib/docker/containers/, but to be honest /var/log/containers/* are just symlinks to that.
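So a prospector pointed at the real files would look roughly like this (the json options are just a common setup for the docker json-file log driver, adjust them to your needs):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/lib/docker/containers/*/*.log   # container id is the directory name
    json.message_key: log                    # docker json-file logs; remove if yours are plain text
    json.keys_under_root: true
```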
As you can see, we have been working on extending our Docker and Kubernetes support, both for logging and metrics, and you can expect more features like this in the future.