Ah, the pains of logging in container environments.
There are a few approaches to your problem, each with its own advantages and disadvantages. Since you have multiple log files, you might want to avoid a mix of solutions and settle on one true way:
- Write all logs to a volume (no logs on stdout/stderr). Advantage: logs bypass the Docker daemon, which otherwise poses a risk of the daemon running out of memory (OOM). Disadvantage: you must provide log rotation yourself so you don't run out of disk space.
  - a) pod-local volume plus a sidecar shipper such as Filebeat (disadvantage: one Filebeat running per pod); see the first sketch after this list
  - b) global volume or host mount with one global Filebeat (disadvantage: no good visibility into disk usage per pod)
- Always write all logs to stdout/stderr. Disadvantage: all logs pass through the Docker daemon, and it doesn't work well with software that writes multiple log files. Advantage: one global DaemonSet is enough to ship all logs. With this approach, applications with multiple logs can still be integrated by using a shared volume and one or two sidecar containers: the first sidecar rotates the logs (if the application's log writer does not support rotation) and the second prints the log to stdout. The streaming sidecar can be as simple as `tail -f -n+1 <path/to/log>`; see the second sketch after this list.
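To illustrate option 1a, here is a rough sketch of a pod that writes its log files into a pod-local emptyDir volume and ships them with a Filebeat sidecar. All names (my-app, the ConfigMap, the Elasticsearch endpoint) are placeholders, and the Filebeat version and config are just one plausible setup, not the only way to do it:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-sidecar-config           # hypothetical name
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: filestream
      id: app-logs
      paths:
        - /var/log/app/*.log               # the files your app writes
    output.elasticsearch:
      hosts: ["https://elasticsearch:9200"] # assumed log backend
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-filebeat-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0                      # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app              # app writes its log files here
  - name: filebeat                         # one shipper per pod (the 1a trade-off)
    image: docker.elastic.co/beats/filebeat:8.13.4
    args: ["-c", "/etc/filebeat/filebeat.yml", "-e"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
    - name: filebeat-config
      mountPath: /etc/filebeat
  volumes:
  - name: logs
    emptyDir: {}                           # pod-local volume; logs never hit stdout
  - name: filebeat-config
    configMap:
      name: filebeat-sidecar-config
```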
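And for the multi-log variant of option 2, a sketch of the streaming-sidecar pattern: the app writes two log files into a shared volume, and one tail sidecar per file re-exposes them on stdout so the node-level shipper picks them up like any other container log. Image names and log paths are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-streaming-sidecars
spec:
  containers:
  - name: app
    image: my-app:1.0                # placeholder; writes the two files below
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: access-log-sidecar         # one streaming sidecar per log file
    image: busybox:1.36
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app/access.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  - name: error-log-sidecar
    image: busybox:1.36
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app/error.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                     # shared between app and sidecars
```

You can then read each file with `kubectl logs app-with-streaming-sidecars -c access-log-sidecar`, and the global DaemonSet ships it along with everything else.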
See the Kubernetes Logging Architecture docs for sample solutions.