Avoiding data loss with Filebeat in a K8S environment

Hello there,

We are in the process of setting up Filebeat as a DaemonSet in our K8S environment. We are evaluating the possibility of data loss: cases where logs generated by an application are never uploaded to Logstash and are no longer available on the host VM.
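For context, here is a minimal sketch of the kind of setup we are running (the Logstash endpoint below is an illustrative placeholder, not our exact configuration):

```yaml
# filebeat.yml (minimal sketch; the Logstash host is a hypothetical placeholder)
filebeat.inputs:
  - type: container
    paths:
      # Docker writes each container's JSON log under its container ID
      - /var/lib/docker/containers/*/*.log

output.logstash:
  hosts: ["logstash.example.internal:5044"]
```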

Filebeat is configured to scan /var/lib/docker/containers/ for logs. Unless we keep the scan interval (`scan_frequency`) very low, e.g. 1s, there is a window in which a container can be GCed by K8S before Filebeat collects its last remaining log lines. I have looked at the various close_* options in the Filebeat docs but couldn't find anything that addresses this.
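To make that concrete, this is roughly the input tuning we have experimented with (a sketch; the values shown are assumptions, not recommendations):

```yaml
# Sketch of the log-input tuning we tried; values are illustrative only
filebeat.inputs:
  - type: log
    paths:
      - /var/lib/docker/containers/*/*.log
    scan_frequency: 1s    # look for new/updated files every second
    close_removed: true   # stop a harvester once its file is deleted (the default)
    clean_removed: true   # drop registry state for deleted files (the default)
```

Even with a 1s scan, a log file that is created and deleted between two scans is never harvested, which is exactly the race we are worried about.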

Is there a way to coordinate between the K8S API server (or the container runtime) and Filebeat to make sure GC happens only after the logs have been collected and shipped to Logstash?
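For what it's worth, we also looked at Filebeat's Kubernetes autodiscover, which at least watches the API server for pod lifecycle events instead of relying purely on directory scans (sketch below; as far as we can tell it reacts to container churn faster but does not actually delay GC):

```yaml
# Autodiscover sketch: Filebeat subscribes to pod events from the API server.
# NODE_NAME is assumed to be injected via the DaemonSet's downward API.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        - config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
```

The only knob we have found so far on the Kubernetes side is the kubelet's container GC tuning (e.g. --minimum-container-ttl-duration), but that just delays GC globally rather than coordinating it with collection.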
