Filebeat on Docker/Kubernetes - delay termination to attempt to clear queue?

I have Filebeat running in a sidecar container, alongside my app container, in a Kubernetes pod. They share an attached volume: the app is writing JSON logs and Filebeat is shipping them.

I already have a Filebeat DaemonSet running which ships stdout/stderr. This question is about the app-specific Filebeat shipping, which might go to an app-specific cloud.elastic.co instance, for example.
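Roughly, the pod looks like this (names, image version, and paths are placeholders rather than my real manifest):

```yaml
# App container writes JSON logs to a shared emptyDir; the Filebeat sidecar
# reads and ships them. (Filebeat ConfigMap mount omitted for brevity.)
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
    - name: app-logs
      emptyDir: {}
  containers:
    - name: app
      image: my-app:latest                        # placeholder
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app                 # app writes JSON logs here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:7.17.0   # placeholder version
      args: ["-c", "/etc/filebeat.yml", "-e"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app                 # Filebeat tails the same files
          readOnly: true
```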

This works well on the whole, but when the pod is terminated, both containers appear to receive SIGTERM at the same time. If the app container takes a couple of seconds to shut down and logs some additional output as it does so, that output is not picked up and shipped by the Filebeat container.

Is there any way to:

  1. Configure Filebeat not to exit in response to SIGTERM until there's nothing left to be processed (maybe with a timeout)?
  2. Configure the Filebeat Docker image to do so?
  3. Configure a Kubernetes pod to delay termination of a particular container?

Option 3 seems the least favourable, because it always introduces a fixed delay (a rough sketch of what I mean follows below). It's also maybe off topic here, but I figured it's a use case that might have come up before!
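For illustration, option 3 would presumably be a preStop hook on the Filebeat sidecar only; the 15-second sleep and image version are arbitrary placeholders:

```yaml
# Option 3 sketch: Kubernetes runs a container's preStop hook to completion
# before sending SIGTERM to that container, so the sidecar keeps shipping
# while the app container (which has no hook) shuts down first.
spec:
  terminationGracePeriodSeconds: 60               # must exceed the sleep below
  containers:
    - name: app
      image: my-app:latest                        # placeholder
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:7.17.0
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 15"]   # arbitrary grace period
```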

Hi @Kieren_Johnstone,

At the moment there is no way to configure Filebeat to ignore SIGTERM. This signal is handled so that harvesters are stopped and the registry is updated in an orderly way, and no messages are lost on a normal shutdown. There is a shutdown_timeout option you could try, but it is intended to give Filebeat time to finish sending events that have already been read.
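For reference, that option lives in filebeat.yml; a minimal snippet (the 10s value is only an example) would be:

```yaml
# Keep Filebeat alive for up to 10s after SIGTERM so the publisher can finish
# sending events that harvesters have already read. It does not wait for new
# lines written after the signal arrives.
filebeat.shutdown_timeout: 10s
```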

One thing you can try is to create your own custom Filebeat image, with an entrypoint that wraps filebeat and ignores the signal, or delays the termination of the process.
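A minimal, untested sketch of such an entrypoint, assuming a 15-second delay and the image's default config path, could look like this:

```sh
#!/bin/sh
# Sketch of a wrapping entrypoint for a custom Filebeat image.
# Start filebeat in the background so this shell (PID 1) receives SIGTERM,
# not filebeat itself.
filebeat -e -c /usr/share/filebeat/filebeat.yml "$@" &
pid=$!

# On SIGTERM, wait a bit so the app container can write its last lines and
# filebeat can pick them up, then forward the signal for a clean shutdown.
on_term() {
  sleep 15
  kill -TERM "$pid"
}
trap on_term TERM

# The first wait is interrupted by the trap; the second waits for filebeat
# to exit after it has been signalled.
wait "$pid"
wait "$pid"
```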

You could also consider logging everything to stdout/stderr and running multiple Filebeat DaemonSets, one for each kind of app-specific log you have.
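For example, each DaemonSet's Filebeat could use autodiscover with a condition on a pod label, so it only harvests its own app's containers; the label and Cloud credentials below are placeholders:

```yaml
# One DaemonSet's filebeat.yml: only harvest containers from pods labelled
# app: my-app, and ship them to that app's own Elastic Cloud deployment.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.labels.app: my-app       # placeholder label
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.container.id}.log

cloud.id: "${CLOUD_ID}"                            # placeholder credentials
cloud.auth: "${CLOUD_AUTH}"
```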

