Filebeat missing end of logs for k8s pods

I have noticed a curious issue: if my Kubernetes pod terminates too quickly, Filebeat does not capture the end of the associated log. Has anyone else hit this, and is there a workaround to ensure Filebeat captures the complete log?


This is an interesting situation!

I guess that you are using autodiscover? If so, here is what might be happening: Filebeat catches the stop event from the Kubernetes/Docker API and immediately stops collecting logs for that container/pod, even if it has not yet read everything that was written before the container exited. This is how autodiscover works, and I suspect you are running into this race condition.
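One knob worth checking (an assumption on my part; verify it against the docs for your Filebeat version) is the Kubernetes autodiscover provider's `cleanup_timeout` setting, which delays tearing down the input after the stop event so Filebeat has time to read the tail of the log file. A minimal sketch, with illustrative values:

```yaml
# Sketch of a Filebeat autodiscover config; paths and the 120s value
# are illustrative, not a recommendation.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      # Keep the input alive this long after the pod stop event,
      # so the end of the log can still be collected.
      cleanup_timeout: 120s
      templates:
        - config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
```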

A workaround for this would be to have a sidecar container with filebeat, so as to ensure that all logs will be collected.
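The sidecar approach could look roughly like the sketch below: the app writes its logs to a shared `emptyDir` volume, and a Filebeat container in the same pod tails that directory. All names, image tags, and paths here are assumptions, not a tested manifest:

```yaml
# Hedged sketch of a Filebeat sidecar; adjust images, paths, and the
# referenced ConfigMap (filebeat-sidecar-config) to your environment.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-filebeat
spec:
  containers:
    - name: app
      image: my-app:latest               # hypothetical app image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app        # app writes its log files here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:8.13.0   # pick your version
      args: ["-e", "-c", "/etc/filebeat/filebeat.yml"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true                 # sidecar only reads the logs
        - name: filebeat-config
          mountPath: /etc/filebeat
  volumes:
    - name: app-logs
      emptyDir: {}
    - name: filebeat-config
      configMap:
        name: filebeat-sidecar-config    # holds a filebeat.yml tailing /var/log/app
```

Because the sidecar shares the pod's lifecycle and the log files live on the shared volume, Filebeat can finish reading them even after the app container exits (as long as the pod's termination grace period allows it).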

Coming back to autodiscover, it would be more than welcome if you could collect the information about the issue you are seeing and open a GitHub issue for it, so the developers can investigate and perhaps provide a fix.


Hey Chris,
Thank you for the response on this. Yes, we are using autodiscover with Filebeat running in a DaemonSet, so that sounds exactly like what is happening in our case.

We were hoping to avoid running a sidecar log collector and just use the DaemonSet, but we may have to explore the sidecar option. Do you have any posts outlining a good sidecar solution with either Filebeat or even Logstash that I could reference?

I will gather some info and open a GitHub issue as soon as I can.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.