Ship Kubernetes metadata with Filebeat

Hi,

We have an established Filebeat ==(lumberjack)==> Logstash => Elasticsearch deployment for Docker container logs coming from a bespoke orchestration platform. We are now replacing our bespoke platform with Kubernetes.

We have Filebeat configured to run one instance per cluster node to harvest every Kubernetes container log (from /var/log/containers/), where the log files are named, per Kubernetes convention, after the Namespace, Pod, and container names. Filebeat is configured very similarly to the Canonical Kubernetes distribution.
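For reference, a minimal sketch of our per-node Filebeat config (the Logstash host is illustrative, not our real value):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      # One harvester per Kubernetes container log on this node.
      - /var/log/containers/*.log

output.logstash:
  # Ships via the lumberjack/beats protocol, as in our existing pipeline.
  hosts: ["logstash.internal:5044"]
```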

When the logs arrive at Logstash we can grok the Namespace, Pod, and container names from the filename field, but we do not have any of the Kubernetes annotations/tags/metadata.
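For context, our grok is along these lines (the field names are our own invention, and it assumes the conventional `<pod>_<namespace>_<container>-<container-id>.log` file naming):

```
filter {
  grok {
    # Filebeat records the harvested file path in the "source" field.
    match => {
      "source" => "/var/log/containers/(?<kubernetes_pod>[^_]+)_(?<kubernetes_namespace>[^_]+)_(?<kubernetes_container>.+)-(?<container_id>[0-9a-f]{64})\.log"
    }
  }
}
```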

Another popular distribution of Kubernetes uses Fluentd instead of Beats to ship logs. In that distribution, there is a Fluentd Kubernetes metadata plugin that gathers the Kubernetes metadata and adds it to the logs before shipping. Sadly, Fluentd does not appear to have any ability to send logs to Logstash via the lumberjack protocol (although there are plugins to ship in the other direction, Logstash to Fluentd), and Filebeat does not appear to have a hook to inject the same information.

At the moment our path forward seems to be to regenerate the Filebeat configuration file each time a new Pod is provisioned, statically specifying every log file path with its metadata as fields, and then restart Filebeat.
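Concretely, the generator would emit one prospector stanza per container, along these lines (the Pod name, label, and field names are all illustrative):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/containers/myapp-2271915563-g9zrl_production_myapp-*.log
    # Metadata known at provisioning time, baked in as custom fields.
    fields:
      kubernetes_namespace: production
      kubernetes_pod: myapp-2271915563-g9zrl
      kubernetes_label_app: myapp
    fields_under_root: true
```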

Do you have any better suggestions for shipping Kubernetes metadata with Filebeat? Are there any future plans to address this metadata issue for Kubernetes, and potentially other platforms?

Regards,

Jason


We are aware of the issue with the metadata, and it is definitely something we are thinking about. However, querying Kubernetes as a "remote" lookup can have quite some consequences. Currently the best way is probably to generate your own Filebeat configs and add the metadata. So that restarts are not required all the time, we are working on dynamic reloading of config files: https://github.com/elastic/beats/pull/3362. Could this be helpful in your case?
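If it lands as proposed in the PR, the config would look roughly like this (option names may still change before release):

```yaml
filebeat.config.prospectors:
  # Watch a directory of prospector config fragments and pick up
  # added/changed/removed files without a full Filebeat restart.
  path: ${path.config}/prospectors.d/*.yml
  reload.enabled: true
  reload.period: 10s
```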

We have been wary of restarting Filebeat on configuration changes, as this has been the source of various re-shipping and registry-state bugs for us (most of which seem to have been fixed since we adopted v1.x Filebeat). These bugs have suggested to us that restarting Filebeat regularly is not a common use case, and it is one we would like to avoid.

If dynamic reloading of configuration proves to be resistant to this class of bugs, or is better tested, then it may be a good answer and we would definitely try it.

I would not recommend constantly restarting Filebeat. To prevent the duplicated-data issues, you should use shutdown_timeout. This should solve the problem (as long as your network is mostly reliable).
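For example (the value is just a starting point):

```yaml
# Give Filebeat up to 5s on shutdown to flush in-flight events,
# so a restart does not re-send them.
filebeat.shutdown_timeout: 5s
```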

Prospector reloading will still have the challenge of deciding exactly when to stop a prospector that has been removed. But with reloading, especially in the Docker / Kubernetes use case, I mainly expect that new prospectors show up and old ones disappear, and these do not have overlapping file paths. In all such cases there shouldn't be any issues; see the sketch below.
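For example, each Pod would get its own drop-in file with a path glob that no other Pod's file matches (names illustrative):

```yaml
# prospectors.d/myapp-2271915563-g9zrl.yml
# Dropped in when the Pod starts, deleted when it goes away.
- input_type: log
  paths:
    - /var/log/containers/myapp-2271915563-g9zrl_production_*.log
```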

If you try out the feature, please let me know if you hit any issues.
